
ALGORITHMIC MARKET DESIGN

A DISSERTATION

SUBMITTED TO THE DEPARTMENT OF ECONOMICS

AND THE COMMITTEE ON GRADUATE STUDIES

OF STANFORD UNIVERSITY

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS

FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

Mohammad Akbarpour

June 2015


http://creativecommons.org/licenses/by-nc/3.0/us/

This dissertation is online at: http://purl.stanford.edu/xf731pn2513

© 2015 by Mohammad Akbarpour. All Rights Reserved.

Re-distributed by Stanford University under license with the author.

This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License.



I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Paul Milgrom, Primary Adviser

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Matthew Jackson

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.

Alvin Roth

Approved for the Stanford University Committee on Graduate Studies.

Patricia J. Gumport, Vice Provost for Graduate Education

This signature page was generated electronically upon submission of this dissertation in electronic format. An original signed hard copy of the signature page is on file in University Archives.



Abstract

This thesis consists of two essays that exploit algorithmic techniques to solve two matching market design problems.

The first essay introduces a simple benchmark model of dynamic matching in networked markets, where agents arrive and depart stochastically and the network of acceptable transactions among agents forms a random graph. The main insight of our analysis is that waiting to thicken the market can be substantially more important than increasing the speed of transactions. We also show that naïve local algorithms that maintain market thickness by choosing the right time to match agents, but do not exploit the global network structure, can perform very close to optimal algorithms. Finally, our analysis asserts that having information about agents' departure times is highly valuable. To elicit agents' departure times when they are private, we design an incentive-compatible continuous-time dynamic mechanism without transfers.

The second essay extends the scope of random allocation mechanisms, in which the mechanism first identifies a feasible "expected allocation" and then implements it by randomizing over nearby feasible integer allocations. Previous literature had shown that the cases in which this is possible are sharply limited. We show that if some of the feasibility constraints can be treated as goals rather than hard constraints then, subject to weak conditions that we identify, any expected allocation that satisfies all the constraints and goals can be implemented by randomizing among nearby integer allocations that satisfy all the hard constraints exactly and the goals at least approximately. We show that by adding ex post utility goals to random serial dictatorship, we can construct a strategy-proof mechanism with the same ex ante utility that is nearly ex post fair.



To:

Those children who are victims of poverty; children who are born in underprivileged families, who are deprived of nurturing childhood developmental opportunities, and consequently who will not perform as well as their privileged peers in school. The challenge of these children is not to thrive, but to survive. In a parallel universe, in which there is less poverty and less inequality, they would have had incredible scientific contributions. The void of the next page is dedicated to those "missing contributions."





Acknowledgements

The opportunity to learn from my advisors has been an extraordinary gift. I am indebted to Paul Milgrom for being the best advisor anyone could ever imagine. His continuous encouragement to think out of the box, his incredible depth and breadth of knowledge in economics and computer science, and his confidence in me were essential in my academic development. To Al Roth for inspiring me on a daily basis. His door was always open to intellectual conversations. His unique way of thinking about markets influenced me in an irreversible way. To Matt Jackson for caring about novel ideas, as opposed to brute-force calculations. He changed my way of thinking about valuable research in economics. To Mitch Polinsky for his life-changing trust in me. And to Paul and Eva, Al and Emilie, Matt and Sara, and Mitch and Joan, for welcoming students as family members.

I am truly thankful to my parents, Mansoureh and Mohammadali; my sisters, Maryam and Fatemeh; and my grandparents for their unconditional love. They made my childhood beautiful and they never stopped loving me.

I am deeply grateful to my wonderful coauthors – Shayan, Shengwu, Afshin, and Sam – for being great friends, as well as passionate scientists. I learned a lot from them, and I am looking forward to collaborating with them on many more projects to come. To the tens of economists and computer scientists who inspired me in the past few years (I name a few of them at the beginning of each chapter). To my classmates at the Stanford economics department for their significant impact on my mental growth as an economist. To Ali Naghi Mashayekhi and Masoud Nili for encouraging me to pursue economics, and to Yahya Tabesh and Amin Saberi for supporting me on my way from engineering to economics. To Caro Lucas for proving that one can fall in love with knowledge – rest in peace, Caro. To Vahid Karimipour for making Maxwell's equations as beautiful as Hafez's poems. To Amir Asghari for making high school mathematics as joyful as The Neverhood. To Farhad Meysami for making me fall in love with scientific thinking.

I am thankful to the wonderful founding team of KelaseDars and Khan Academy Farsi – Shima, Reza, Sahar, and Alireza. They taught me that real happiness is in sharing what you have. To all my friends, with whom I watched movies, camped at Tahoe and Yosemite, played football and squash, talked, walked, debated, smiled, and shared moments, feelings and stories. To name a few (in a random order): Mohammad, Maryam, Nima, Reza, Arash, Nushin, Mohsen, Parastoo, Babak, Alireza, Mahnoosh, Saeed, Mohsen, Zeinab, Tahereh, Mohammadreza, Azar, Behnam, Hazhir, Sara, Rad, Mahmoud, Behrad, Neda, Hadi, Hanieh, Hossein, Homeira, Adel, Shayan, Farnaz, Hamed, Maryam, Hamed, Marzieh, Mohsen, Narges, Shima, Shahin, Leili, Ian, Soheil, Amin, Farid, Reza, Masoud, Milad, Ali, Kaveh, Leila, Pouyan, Nazanin, Mohammad, Salman, Sina, Sanam, Soha, Mohammad, Hessam, Kaveh, Keyvan, Ahmadali, Amir – thank you all for making me much happier.

Last but, of course, not least, I am truly thankful to Shima, my best friend and kindest supporter, who was there whenever I needed her, made me smile even when I was sad, and influenced me more than any other person in my life.



Contents

Abstract
Acknowledgements

1 Introduction

2 Dynamic Matching Markets
   2.0.1 Related Work
   2.1 The Model
   2.2 Our Contributions
      2.2.1 Timing in Matching Markets
      2.2.2 Welfare Under Discounting and Optimal Waiting Time
      2.2.3 Information and Incentive-Compatibility
      2.2.4 Technical Contributions
   2.3 Performance of the Optimum and Periodic Algorithms
      2.3.1 Loss of the Optimum Online Algorithm
      2.3.2 Loss of the Omniscient Algorithm
   2.4 Modeling an Online Algorithm as a Markov Chain
      2.4.1 Background
      2.4.2 Markov Chain Characterization
   2.5 Performance Analysis
      2.5.1 Loss of the Greedy Algorithm
      2.5.2 Loss of the Patient Algorithm
      2.5.3 Loss of the Patient(α) Algorithm
   2.6 Welfare and Optimal Waiting Time under Discounting
      2.6.1 Welfare of the Patient Algorithm
      2.6.2 Welfare of the Greedy Algorithm
   2.7 Incentive-Compatible Mechanisms
   2.8 Concluding Discussions
      2.8.1 Insights of the Paper
      2.8.2 Discussion of Assumptions
      2.8.3 Further Extensions

3 Random Allocation Mechanisms
   3.1 Introduction
      3.1.1 Model and Contributions
      3.1.2 Related Work
   3.2 Setup
      3.2.1 Approximate Implementation
   3.3 The Main Theorem
      3.3.1 The Structure of Soft Blocks
      3.3.2 Corollary 1: Fully General Soft Structure
      3.3.3 Corollary 2: Local Structure
      3.3.4 Generalized Structures
   3.4 Applications
      3.4.1 Diversity Requirements in School Choice
      3.4.2 Distance-based Walk-zone Priorities
      3.4.3 Ex post Guarantees
   3.5 Fixing Random Serial Dictatorship
   3.6 Conclusion

A Missing Proofs From Chapter 2
   A.1 Auxiliary Inequalities
   A.2 Proof of Theorem 2.4.2
      A.2.1 Stationary Distributions: Existence and Uniqueness
      A.2.2 Upper bounding the Mixing Times
      A.2.3 Mixing time of the Greedy Algorithm
      A.2.4 Mixing time of the Patient Algorithm
   A.3 Proofs from Section 2.5
      A.3.1 Proof of Lemma 2.5.4
      A.3.2 Proof of Lemma 2.5.7
      A.3.3 Proof of Lemma 2.5.8
      A.3.4 Proof of Proposition 2.5.9
      A.3.5 Proof of Lemma 2.5.10
   A.4 Proofs from Section 2.6
      A.4.1 Proof of Lemma 2.6.3
   A.5 Proofs from Section 2.7
      A.5.1 Proof of Lemma 2.7.4
   A.6 Small Market Simulations

B Missing Proofs From Chapter 3
   B.1 Implementation: A Random Mechanism
      B.1.1 Definitions
      B.1.2 Operation X
      B.1.3 The Implementation Mechanism
      B.1.4 Approximate Satisfaction of Soft Constraints
   B.2 Average Performance of the Matching Algorithm
   B.3 Chernoff Bounds

Bibliography

List of Figures

2.1 Patient algorithm is not optimal
2.2 Optimal trade frequency as a function of discount rate
2.3 Greedy algorithm Markov Chain
2.4 Patient algorithm Markov Chain
3.1 Assignment problem framework
3.2 Capacity blocks
3.3 A hierarchy
3.4 Failure of bihierarchy assumption
3.5 An illustration of the deepest level condition
3.6 Implementation mechanism
3.7 Local structure
3.8 Depth k condition
3.9 School choice application
A.1 A Markov Chain to study the mixing time
A.2 Small market simulations
B.1 A floating cycle of length 6
B.2 A floating path
B.3 Average performance simulations I
B.4 Average performance simulations II



Chapter 1

Introduction

Economics is the science of the allocation of scarce resources. Consequently, who gets what is one of the most fundamental questions of economics. In some marketplaces, such as the New York Stock Exchange, prices determine who gets what. In such markets, if you can afford something, you can have it. In some other marketplaces, such as the allocation of students to public schools, prices play very little or no role.

Matching markets, to start with one definition, are markets in which prices are not the only factor determining who gets what. In some matching markets, such as the allocation of organs, monetary transfers are fully precluded. In some other matching markets, such as the labor market or the college admissions market, prices play some role but they are not the only factor determining the allocation outcome. An undergraduate education at Stanford University, for instance, is expensive, but there are many more people who are willing to pay Stanford's tuition fee than there are people who gain admission.

Matching markets surround us: the kidney exchange market, the National Residency Matching Program (NRMP), school choice systems, many offline and online labor markets such as Upwork and Uber, the allocation of courses to students in business schools, foster care systems, and the assignment of cadets to military bases are all examples of markets in which prices do not do all the work.




Economists (and computer scientists) have widely studied matching markets during the past half century.1 Perhaps surprisingly, and despite the fact that hundreds of papers have studied matching markets, many important aspects of matching markets are still undertheorized. There are, in my opinion, at least two reasons for this. The first reason, as simple as it may seem, is that we have not had enough time to study them. Many matching markets have emerged in recent years; some of them were formed over the Internet (e.g., online labor markets or Airbnb) and some others were formed based on recent scientific breakthroughs (e.g., organ transplant technology). It is not too unrealistic to claim that the arrival rate of new applications motivating research problems is higher than the arrival rate of researchers who are working to solve them. The second reason why some aspects of matching markets are undertheorized is that the mathematical tools and techniques of the field have not kept pace with the computational complexities of emerging marketplaces.

In my dissertation, I envision Algorithmic Market Design as a conceptual and technical paradigm that – by exploiting tools from algorithm design and other areas of theoretical computer science – aims to fill the gap between the theory of matching market design and the practice of emerging, complex market design problems, especially in dynamic and networked environments.

I want to emphasize that, in my opinion, the abstractions in simple theoretical models can be highly valuable because they focus attention on specific aspects of the dynamics that determine outcomes in a market. Nevertheless, if the requirement of analytical tractability limits our tools in such a way that essential components or the interplays between them are overlooked, then we may not achieve the goal of the modeling exercise. As I will show in this dissertation, dynamic aspects and complex structures can be crucial parts of optimal designs; therefore, models that abstract from them for the sake of simplicity can be misleading.

This dissertation contains two essays on matching market design. The first essay is concerned with an allocation problem in a dynamic, networked environment. The complexity of the model arises from the fact that the state space of the problem is the set of all possible networks that stochastically evolve over time. The second essay concerns market-making in a static resource allocation problem with complex quotas. The complexity of the problem arises from the structure of the quotas that must be satisfied in the allocation.

1 For a discussion of those studies, see Chapter 2.

The first essay2 is concerned with the "option value" of waiting in dynamic, networked markets. In dynamic matching markets (such as the kidney exchange market, or Uber, or dating platforms), in contrast to static ones, agents who are not matched may stay in the market to be matched later. Thus, a matching policy will not only affect who gets matched today, but will also affect the composition of options tomorrow. Consequently, designing matching algorithms for such environments is a dynamic decision problem in which waiting can be valuable. Waiting expands the planner's information through at least two channels. First, it resolves uncertainty about future matching opportunities. Second, waiting increases the planner's information about which agents have more urgent needs than others. On the other hand, waiting can be costly. The central questions of the paper are: when should the planner wait, and for how long?

Because we explicitly model the underlying 'exchange possibilities network', the state space of the planner's problem is the set of all possible networks, which is computationally complex and not soluble via standard dynamic programming techniques. This is exactly why we exploit tools and techniques from algorithm design and stochastic processes to analyze this problem. We design some heuristic matching algorithms and bound their performance. Then, by comparing those bounds to the bounds that we get for the optimal solutions, we identify key features of matching algorithms in dynamic environments. In addition, we design an incentive-compatible mechanism to extract valuable information truthfully, and study some interesting comparative statics results. This is done without explicitly solving the underlying optimization problems of the model.
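To get a feel for why standard dynamic programming fails here, note that a state of the planner's problem is a graph on the agents currently present, and the number of labeled undirected graphs on n vertices is 2^(n(n-1)/2). The toy computation below is purely illustrative (the thesis never enumerates states; the point is that no algorithm could):

```python
def num_graphs(n: int) -> int:
    """Count labeled undirected graphs on n vertices: one bit per potential edge."""
    return 2 ** (n * (n - 1) // 2)

assert num_graphs(3) == 8        # the 8 graphs on 3 labeled vertices
print(len(str(num_graphs(30))))  # decimal digits in the count for a 30-agent market
```

Even a 30-agent market yields a 131-digit state count, which is why the chapter bounds the performance of simple heuristics instead of solving the Bellman equation directly.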

The second essay3 of this dissertation is concerned with the allocation of indivisible goods when cash transfers are prohibited and the final allocation is required to satisfy multiple quotas. A popular way to solve this problem is the 'expected assignment' method. In this mechanism, we first identify a fair and efficient expected assignment, and then implement it by randomizing over feasible integer allocations. Unfortunately, the constraint structures for which this is possible are sharply limited. The key contribution of this paper is to show that by reconceptualizing some constraints from 'hard' objectives to 'goals', one can accommodate many more constraints into this allocation problem. The key technical novelty of this result is in designing a new matching algorithm that allocates objects to agents in such a way that the constraints are satisfied exactly and the goals are satisfied at least approximately.

2 The first essay is based on the paper Dynamic Matching Market Design, which is joint work with Shengwu Li and Shayan Oveis Gharan [6].

3 The second essay is based on the paper Approximate Random Allocation Mechanisms, which is joint work with Afshin Nikzad [7].
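For intuition about the 'expected assignment' method, consider the classic special case in which the only hard constraints are row and column sums: by the Birkhoff–von Neumann theorem, any doubly stochastic matrix is a lottery over permutation matrices. The sketch below implements that benchmark decomposition; it is the standard construction, not the essay's more general algorithm (which must also handle goals), and the example matrix is illustrative:

```python
def perfect_matching(support):
    """Kuhn's augmenting-path algorithm: perfect matching on a 0/1 support matrix."""
    n = len(support)
    match_col = [-1] * n  # match_col[j] = row currently assigned to column j

    def try_row(i, seen):
        for j in range(n):
            if support[i][j] and not seen[j]:
                seen[j] = True
                if match_col[j] == -1 or try_row(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    for i in range(n):
        if not try_row(i, [False] * n):
            return None  # never happens when the support comes from a doubly stochastic matrix
    perm = [0] * n
    for j, i in enumerate(match_col):
        perm[i] = j  # perm[i] = column matched to row i
    return perm

def birkhoff_decompose(P, tol=1e-9):
    """Express a doubly stochastic matrix P as a lottery over permutations."""
    P = [row[:] for row in P]
    n = len(P)
    lottery = []
    while max(max(row) for row in P) > tol:
        perm = perfect_matching([[1 if x > tol else 0 for x in row] for row in P])
        w = min(P[i][perm[i]] for i in range(n))  # largest weight that keeps P nonnegative
        lottery.append((w, perm))
        for i in range(n):
            P[i][perm[i]] -= w
    return lottery

# Illustrative expected assignment of 3 agents to 3 objects.
P = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
lottery = birkhoff_decompose(P)
assert abs(sum(w for w, _ in lottery) - 1.0) < 1e-6  # weights form a probability lottery
```

Sampling a permutation with these weights yields an integer assignment whose expectation is exactly P. The essay's contribution is precisely to extend this logic to constraint structures where the Birkhoff property fails, by demoting some constraints to goals.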

Both essays of the thesis are concerned with resource allocation in markets where prices play very little or no role, and where finding the "optimal" solution is not feasible. In the first essay, finding the optimum is computationally complex. In the second essay, the exact satisfaction of all constraints is theoretically impossible. To overcome this issue, in both cases we aim to find "good enough", rather than optimal, solutions. In the first essay, by simplifying the design space, we show that a simple matching algorithm that ignores the network complexity, but chooses the optimal level of market thickness, performs very close to the optimum solution. In the second essay, we show that by satisfying some constraints in an approximate sense – a "good enough" solution – one can overcome an impossibility result.

I define Algorithmic Market Design as a subfield of market design that deals with problems in which finding the optimum solution is either computationally complex or theoretically impossible. In such cases, the approach of an algorithmic market designer – who knows that "the best" is the greatest enemy of "the good" – is to design a good enough allocation policy, rather than a perfect one. It is often the case that an attempt to find an allocation policy that is approximately optimal will improve our understanding of the key features of the (unachievable) optimum. Furthermore, since we can hardly identify the functional form of the objective function in most problems, an attempt to understand the key features of good allocation policies in a robust way can be even more valuable than solving for the optimum policy for a given functional form.

The two essays of this dissertation are two examples that exhibit the usefulness and applications of Algorithmic Market Design in solving computationally complex resource allocation problems. I hope that these two examples, and the tools and techniques described here, will pave the way for more applications of Algorithmic Market Design in solving emerging, complex market design problems.


Chapter 2

Dynamic Matching Markets

The theory of matching has guided the designs of many markets, from school choice, to kidney exchange, to the allocation of medical residents. In a series of classic papers, economists have extensively characterized good matching algorithms for static settings.1 In the canonical set-up, a social planner faces a set of agents who have preferences over partners, contracts, or combinations thereof. The planner's goal is to find a matching algorithm with desirable properties (e.g., stability, efficiency, or strategy-proofness). The algorithm is run, a match is made, and the problem ends.

Of course, many real-world matching problems are dynamic. In a dynamic matching environment, agents arrive gradually over time. A social planner continually observes the agents and their preferences, and chooses how to match agents. Matched agents leave the market. Unmatched agents either persist or depart. Thus, the planner's decision today affects the sets of agents and options tomorrow.

Some seasonal markets, such as school choice systems and the National Residency Matching Program, are well described as static matching problems without intertemporal spillovers. However, some markets are better described as dynamic matching problems. Some examples include:

• Kidney exchange: In paired kidney exchanges, patient-donor pairs arrive over time. They stay in the market until they are matched to a compatible pair, their condition deteriorates, or they receive a cadaveric kidney from a waiting list.

• Markets with brokers: Some markets, such as real estate, aircraft, and ship charters, involve intermediary brokers who receive requests to buy or sell particular items. A broker facilitates transactions between compatible buyers and sellers, but does not hold inventory. Agents may withdraw their request if they find an alternative transaction.

• Allocation of workers to time-sensitive tasks: Both within firms and in online labor markets, such as Uber and oDesk, planners allocate workers to tasks that are profitable to undertake. Tasks arrive continuously, but may expire. Workers are suited to different tasks, but may cease to be available.

1 See [35, 30, 49, 71, 72, 42, 73, 41].

In dynamic settings, the planner must decide not only which agents to match, but also when to match them. If the planner waits, new agents may arrive, and a more socially desirable match may be found. Waiting, in addition, will increase the planner's information about which agents' needs are more urgent than others'. On the other hand, waiting might impose waiting costs on agents.

This paper identifies features of optimal matching algorithms in dynamic environments. Our discussion of the benefits and costs of waiting suggests that static matching models do not capture important features of dynamic matching markets. Obviously, waiting may bring new agents, and thus expand the set of feasible matchings. More generally, in a static setting, the planner chooses the best algorithm for an exogenously given set of agents and their preferences. By contrast, in a dynamic setting, the set of agents and trade options at each point in time depends endogenously on the matching algorithm.

The optimal timing policy in a dynamic matching problem is not obvious a priori. In practice, many paired kidney exchanges enact static matching algorithms ('match-runs') at fixed intervals.2 Even then, matching intervals differ substantially between exchanges: the Alliance for Paired Donation conducts a match-run once a weekday, the United Network for Organ Sharing conducts a match-run once a week,3 the South Korean kidney exchange conducts a match-run once a month, and the Dutch kidney exchange conducts a match-run once a quarter [8]. This shows that policymakers select different timing policies when faced with seemingly similar dynamic matching problems. It is therefore useful to identify good timing policies, and to investigate how policy should depend on the underlying features of the problem.

2 In graph theory, a matching is a set of edges that have no nodes in common.

In this paper, we create and analyze a simple model of dynamic matching on

networks. Agents arrive and depart stochastically. We use binary preferences, where

a pairwise match is either acceptable or unacceptable, generated according to a known

distribution. These preferences are persistent over time, and agents may discount the

future. The set of agents (vertices) and the set of potential matches (edges) form

a random graph. Agents do not observe the set of acceptable transactions, and are

reliant upon the planner to match them to each other. We say that an agent perishes

if she leaves the market unmatched.

The planner’s problem is to design a matching algorithm; that is, at any point

in time, to select a subset of acceptable transactions and broker those trades. The

planner observes the current set of agents and acceptable transactions, but has only

probabilistic knowledge about the future. The planner may have knowledge about

which agents’ needs are urgent, in the sense that he may know which agents will

perish imminently if not matched. The goal of the planner is to maximize the sum

of the discounted utilities of all agents. In the important special case where the cost

of waiting is zero, the planner’s goal is equivalent to minimizing the proportion of

agents who perish. We call this the loss of an algorithm.

In this setting, the planner faces a trade-off between matching agents quickly and

waiting to thicken the market. If the planner matches agents frequently, then matched

agents will not have long to wait, but it will be less likely that any remaining agent

has a potential match (a thin market). On the other hand, if the planner matches

agents infrequently, then there will be more agents available, making it more likely

that any given agent has a potential match (a thick market).

When facing a trade-off between the frequency of matching and the thickness of

3See http://www.unos.org/docs/Update_MarchApril13.pdf


the market, what is the optimal timing policy? Because we explicitly model the graph

structure of the planner’s matching problem, the state space of the resulting Markov

Decision Problem is combinatorially complex. Thus, it is not amenable to solution

via standard dynamic programming techniques. Instead, to analyze the model, we

formulate simple matching algorithms with different timing properties, and compare

them to analytic bounds on optimum performance. This will enable us to investigate

whether timing is an important feature of dynamic matching algorithms.

Our algorithms are as follows: The Greedy algorithm attempts to match each agent

upon arrival; it treats each instant as a static matching problem without regard for the

future.4 The Patient algorithm attempts to match agents on the verge of leaving the market, potentially by matching them to a non-urgent agent. Both these algorithms

are local, in the sense that they look only at the immediate neighbors of the agent

they attempt to match rather than at the global graph structure.5 We also study a

family of algorithms that speed up the trade frequency of the Patient algorithm. The

Patient(α) algorithm attempts to match urgent cases, and additionally attempts to

match each non-urgent case at some rate determined by α.6

We now state our main results. First, we analyze the performance of algorithms

with different timing properties, in the benchmark setting where the planner can

identify urgent cases. Second, we relax our informational assumption, and thereby

establish the value of short-horizon information about urgent cases. Third, we exhibit

a dynamic mechanism that truthfully elicits such information from agents.

Our first family of results concerns the problem of timing in dynamic matching

markets. First, we establish that the loss of the Patient algorithm is exponentially

(in the average degree of agents) smaller than the loss of the Greedy algorithm. This

entails that, for even moderately dense markets, the Patient algorithm substantially

outperforms the Greedy algorithm. For example, suppose on average agents perish

4Our analysis of the Greedy algorithm encompasses waiting-list policies where brokers make transactions as soon as they are available, giving priority to agents who arrived earlier.

5Example 2.2.5 shows a case in which ‘locality’ of the Patient algorithm makes it suboptimal.
6More precisely, every non-urgent agent is treated as urgent when an exogenous “exponential clock” ticks; the planner attempts to match her either at that instant or when she becomes truly urgent.


after one year. In a market where 1000 agents arrive every year and the probability of an acceptable transaction is 1/100, the loss of the Patient algorithm is no more than 7% of the loss of the Greedy algorithm. Thus, varying the timing properties of

simple algorithms has large effects on their performance.

Second, we find that the loss of the Patient algorithm is close to the loss of the

optimum algorithm. Recall that the Patient algorithm is local; it looks only in the

immediate neighborhood of the agents it seeks to match. By contrast, the optimum

algorithm is global and potentially very complex; the matchings it selects depend on

the entire graph structure. Thus, this result suggests that the gains from waiting to

thicken the market are large compared to the total gains from considering the global

network structure.

Third, we find that it is possible to accelerate the Patient algorithm and still

achieve exponentially small loss.7 That is, we establish a bound for the tuning parameter α such that the Patient(α) algorithm has exponentially small loss. Given

the same parameters as in our previous example, under the Patient(α) algorithm, the

planner can promise to match agents in less than 4 months (in expectation) while the

loss is at most 37% of the loss of the Greedy algorithm. Thus, even moderate degrees

of waiting can substantially reduce the proportion of perished agents.

Next, we examine welfare under discounting. We show that for a range of discount

rates, the Patient algorithm delivers higher welfare than the Greedy algorithm, and

for a wider range of discount rates, there exists α such that the Patient(α) algorithm

delivers higher welfare than the Greedy algorithm. Then, in order to capture the

trade-off between the trade frequency and the thickness of the market, we solve for

the optimal waiting time as a function of the market parameters. Our comparative

statics show that the optimal waiting time is increasing in the sparsity of the graph.

Our second family of results relaxes the informational assumptions in the bench-

mark model. Suppose that the planner cannot identify urgent cases; i.e. the planner

has no individual-specific information about departure times. We find that the loss

of the Patient algorithm, which naïvely exploits urgency information, is exponentially

7As before, the exponent is in the average degree of agents.


smaller than the loss of the optimum algorithm that lacks such information.8 This

suggests that short-horizon information about departure times is very valuable.

On the other hand, suppose that the planner has more than short-horizon infor-

mation about agent departures. The planner may be able to forecast departures long

in advance, or foresee how many new agents will arrive, or know that certain agents

are more likely than others to have new acceptable transactions. We prove that no ex-

pansion of the planner’s information allows him to achieve a better-than-exponential

loss. Taken as a pair, these results suggest that short-horizon information about de-

parture times is especially valuable to the planner. Lacking this information leads to

large losses, and having more than this information does not yield large gains.

In some settings, however, agents have short-horizon information about their de-

parture times, but the planner does not. Our final result concerns the incentive-

compatible implementation of the Patient(α) algorithm.9 Under private information,

agents may have incentives to mis-report their urgency so as to hasten their match

or to increase their probability of getting matched. We show that if agents are not

too impatient, a dynamic mechanism without transfers can elicit such information.

The mechanism treats agents who report that their need is urgent, but persist, as

though they had left the market. This means that agents trade off the possibility of a

swifter match (by declaring that they are in urgent need now) with the option value

of being matched to another agent who has an urgent need in future. We prove that

it is arbitrarily close to optimal for agents to report the truth in large markets.

The rest of the paper is organized as follows. Section 2.1 introduces our dynamic

matching market model and defines the objective. Section 2.2 presents our main con-

tributions; we recommend that readers consult this section to see a formal statement

of our results without getting into the details of the proofs. Section 2.3 analyzes two

optimal policies as benchmarks and provides analytic bounds on their performance.

8This result has some of the flavor of Bulow and Klemperer’s theorem [25] comparing simple auctions to optimal negotiations. They show that simple auctions with N + 1 bidders raise more revenue than optimal mechanisms with N bidders. We show that simple matching algorithms that thicken the market by exploiting urgency information are better than optimal algorithms that do not.

9Note that the Patient(α) algorithm contains the Patient algorithm as a special case.


Section 2.4 models our algorithms as Markov Chains and bounds the mixing times

of the chains. Section 2.5 provides a detailed analysis of the Greedy algorithm, the

Patient algorithm, and the Patient(α) algorithm and bounds their performance. Sec-

tion 2.6 takes waiting costs into account and bounds the social welfare under different

algorithms. Section 2.7 considers the case where the urgency of an agent’s needs is

private information, and exhibits a truthful direct revelation mechanism. Section 2.8

includes the concluding discussions.10

2.0.1 Related Work

There have been several studies on dynamic matching in the literatures of economics,

computer science, and operations research, each fitting a specific marketplace, such

as the real estate market, paired kidney exchange, or online advertising. To the

best of our knowledge, no previous work has offered a general framework for dynamic

matching in networked markets, and no previous work has considered stochastic agent

departures. This paper is also the first to produce analytic results on bilateral dynamic

matching that explicitly account for discounting.11

[54] and [18] study an overlapping generations model of the housing market. In

their models, agents have deterministic arrivals and departures. In addition, the

housing side of the market is infinitely durable and static, and houses do not have

preferences over agents. In the same context, [55] studies a one-sided dynamic housing

allocation problem in which houses arrive stochastically over time. His model is based

on two waiting lists and does not include a network structure. In addition, agents

remain in the waiting list until they are assigned to a house; i.e., they do not perish.

10This paper has benefited from helpful comments of many people. We thank Paul Milgrom and Alvin Roth for valuable comments and suggestions. We also thank Itai Ashlagi, Peter Biro, Timothy Bresnahan, Jeremy Bulow, Gabriel Carroll, Ben Golub, Matthew Jackson, Fuhito Kojima, Scott Kominers, Soohyung Lee, Jacob Leshno, Malwina Luczak, Stephen Nei, Muriel Niederle, Afshin Nikzad, Michael Ostrovsky, Takuo Sugaya, Bob Wilson, and Alex Wolitzky for their valuable comments, as well as several seminar participants for helpful suggestions. All errors remain our own.

11Other analytic results in bilateral dynamic matching concern statistics such as average waiting times, which are related to but not identical with discounted utility. See a recent study of dynamic barter exchange markets by [9], where agents never perish and the main objective is the average waiting time. They show that when only pairwise exchanges are allowed, the Greedy algorithm is close to the optimum, which is similar to Theorem 2.2.11.


In the context of live-donor kidney exchanges, [76] studies an interesting model

of the dynamic kidney exchange in which agents have multiple types. In his model,

agents never perish, and so one insight of his model is that waiting to thicken the

market is not helpful when only bilateral exchanges are allowed. This is very different

from the insights of our paper. In the Operations Research and Computer Science

literatures, dynamic kidney matching has been extensively studied; see, e.g., [79, 75, 14, 31]. Perhaps most related to our work is that of [11], who construct a discrete-time

finite-horizon model of dynamic kidney exchange. Unlike our model, agents who are

in the pool neither perish nor bear any waiting cost, and so they do not model agents’

incentives. Their model has two types of agents, one easy to match and one hard

to match, which then creates a specific graph structure that fits well to the kidney

market.

In an independent concurrent study, inspired by online labor markets such as

oDesk, [10] model a dynamic two-sided matching market and show that

reducing search and screening costs does not necessarily increase welfare. Their main

goal is to analyze congestion in decentralized dynamic markets, as opposed to our goal

which is to study the “when to match” question from a central planning perspective.

The problem of online matching has been extensively studied in the literature

of online advertising. In this setting, advertisements are static, but queries arrive

adversarially or stochastically over time. Unlike our model, queries persist in the

market for exactly one period. [47] introduced the problem and designed a randomized

matching algorithm. Subsequently, the problem has been considered under several

arrival models with pre-specified budgets for the advertisers [61, 37, 34, 59].

In contrast to dynamic matching, there are numerous investigations of dynamic

auctions and dynamic mechanism design. [67] generalize the VCG mechanism to

a dynamic setting. [12] construct efficient and incentive-compatible dynamic mecha-

nisms for private information settings. [65] and [36] extend Myerson’s optimal auction

result [62] to dynamic environments. We refer interested readers to [66] for a review

of the dynamic mechanism design literature.


2.1 The Model

In this section, we provide a stochastic continuous-time model for a bilateral matching

market that runs in the interval [0, T]. Agents arrive at the market at rate m according to a Poisson process. Hence, in any interval [t, t + 1], m new agents enter the market

in expectation. Throughout the paper we assume m ≥ 1. For t ≥ 0, let At be the set

of the agents in our market at time t, and let Zt := |At|. We refer to At as the pool

of the market. We start by describing the evolution of At as a function of t ∈ [0, T ].

Since we are interested in the limit behavior of At, without loss of generality, we may

assume A0 = ∅. We use A^n_t to denote12 the set of agents who enter the market at time t. Note that with probability 1, |A^n_t| ≤ 1. Also, let A^n_{t0,t0+t1} denote the set of agents who enter the market in the time interval [t0, t0 + t1].

Each agent becomes critical according to an independent Poisson process with rate

λ. This implies that, if an agent a enters the market at time t0, then she becomes

critical at some time t0+X where X is an exponential random variable with parameter

λ. Any critical agent leaves the market immediately, so the last point in time at which an agent can be matched is the time at which she becomes critical. We say an agent a perishes

if a leaves the market unmatched.13

We assume that an agent a ∈ At leaves the market at time t, if any of the following

three events occur at time t:

• a is matched with another agent b ∈ At,

• a becomes critical and gets matched, or

• a becomes critical and leaves the market unmatched, i.e., a perishes.

Say a enters the market at time t0 and becomes critical at time t0 + X, where X is an exponential random variable with parameter λ. By the above discussion, for any matching algorithm, a leaves the market at some time t1, where t0 ≤ t1 ≤ t0 + X

12As notational guidance, we use subscripts to refer to a point in time or a time interval, while superscripts n, c refer to new agents and critical agents, respectively.

13We intend this as a term of art. In the case of kidney exchange, perishing can be interpreted as a patient’s medical condition deteriorating in such a way as to make transplants infeasible.


(note that a may leave sooner than t0 + X if she is matched before becoming critical). The

sojourn of a is the length of the interval that a is in the pool, i.e., s(a) := t1 − t0.

We use A^c_t to denote the set of agents that are critical at time t.14 Also, note that for any t ≥ 0, with probability 1, |A^c_t| ≤ 1.

For any pair of agents, the probability that a bilateral transaction between them is

acceptable is d/m, where 0 ≤ d ≤ m, and these probabilities are independent. For the sake of notational clarity, we may use q := 1 − d/m. For any t ≥ 0, let Et ⊆ At × At be the set of acceptable bilateral transactions between the agents in the market (the set of edges) at time t, and let Gt = (At, Et) be the exchange possibilities graph at time t. Note that if a, b ∈ At and a, b ∈ At′, then (a, b) ∈ Et if and only if (a, b) ∈ Et′; i.e., the acceptable bilateral transactions are persistent throughout the process. For

an agent a ∈ At we use Nt(a) ⊆ At to denote the set of neighbors of a in Gt. It

follows that, if the planner does not match any agents, then for any fixed t ≥ 0,

Gt is distributed as an Erdős–Rényi graph with parameter d/m, where d is the average degree15 of agents [33].
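These dynamics are easy to simulate. The sketch below is our own illustration (the function name and parameters are ours, not the dissertation's): with no matching and λ = 1, the pool behaves like an M/M/∞ queue, so the pool size Zt settles around m, while the pool graph at any fixed time is Erdős–Rényi with edge probability d/m.

```python
import random

def average_pool_size(m=100, T=200.0, burn_in=10, seed=0):
    """Simulate arrivals (Poisson process with rate m) and departures
    (Exp(1) criticality clock per agent) with NO matching, and return
    the time-averaged pool size Z_t. Illustrative sketch only."""
    rng = random.Random(seed)
    agents = []                        # (arrival time, perish time)
    t = 0.0
    while True:
        t += rng.expovariate(m)        # inter-arrival gaps ~ Exp(m)
        if t > T:
            break
        agents.append((t, t + rng.expovariate(1.0)))
    # sample Z_t at integer times after a burn-in period
    samples = [sum(1 for (a, d) in agents if a <= s < d)
               for s in range(burn_in, int(T))]
    return sum(samples) / len(samples)

avg_pool = average_pool_size()         # concentrates near m = 100
```

The time-averaged pool size hovers near m, which is why d (the expected degree in an Erdős–Rényi graph on roughly m vertices with edge probability d/m) is the natural density parameter of the market.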

Let A = ∪_{t≤T} A^n_t, let E ⊆ A × A be the set of acceptable transactions between agents in A, and let G = (A, E).16 Observe that any realization of the above stochastic process is uniquely defined given A^n_t, A^c_t for all t ≥ 0 and the set of acceptable transactions, E. A vector (m, d, λ) represents a dynamic matching market. Without loss of generality, we can scale time so that λ = 1 (by normalizing m and d). Therefore, throughout the paper, we assume λ = 1, unless otherwise specified.17

Online Matching Algorithms. A set of edges Mt ⊆ Et is a matching if no two

edges share the same endpoints. An online matching algorithm, at any time t ≥ 0,

14In our proofs, we use the fact that A^c_t ⊆ ∪_{0≤τ≤t} A_τ. In the example of the text, we have a ∈ A^c_{t0+X}. Note that even if agent a is matched before becoming critical (i.e., t1 < t0 + X), we still have that a ∈ A^c_{t0+X}. Hence, A^c_t is not necessarily a subset of At, since it may contain agents who are already matched and have left the market. This generalized definition of A^c_t is going to be helpful in our proofs.

15In an undirected graph, the degree of a node is equal to the total number of edges connected to that node.

16Note that E ⊇ ∪_{t≤T} Et, and the two sets are not typically equal, since two agents may find it acceptable to transact even though they are not in the pool at the same time, because one of them was matched earlier.

17See Proposition 2.5.12 for details of why this is without loss of generality.


selects a (possibly empty) matching, Mt, in the current acceptable transactions graph

Gt, and the endpoints of the edges in Mt leave the market immediately. We assume

that any online matching algorithm at any time t0 knows only the graphs Gt for t ≤ t0 and does not know anything about Gt′ for t′ > t0. In the benchmark case

that we consider, the online algorithm can depend on the set of critical agents at time

t; nonetheless, we will extend several of our theorems to the case where the online

algorithm does not have this knowledge. As will become clear, this knowledge has a

significant impact on the performance of any online algorithm.

We emphasize that the random sets At (the set of agents in the pool at time

t), Et (the set of acceptable transactions at time t), Nt(a) (the set of an agent a’s

neighbors), and the random variable Zt (pool size at time t) are all functions of the

underlying matching algorithm. We abuse notation and do not include the name of the algorithm when we analyze these variables.

The Goal. The goal of the planner is then to design an online matching algorithm

that maximizes the social welfare, i.e., the sum of the utility of all agents in the

market. Let ALG(T ) be the set of matched agents by time T ,

ALG(T ) := {a ∈ A : a is matched by ALG by time T}.

We may drop the T in the notation ALG(T ) if it is clear from context.

An agent receives zero utility if she leaves the market unmatched. If she is

matched, she receives a utility of 1 discounted at rate δ. More formally, if s(a) is

the sojourn of agent a, then we define the utility of agent a as follows:

u(a) := e^{−δ·s(a)} if a is matched, and u(a) := 0 otherwise.

We define the social welfare of an online algorithm to be the expected sum of the


utility of all agents in the interval [0, T ], divided by a normalization factor:

W(ALG) := E[ (1/(mT)) · Σ_{a ∈ ALG(T)} e^{−δ·s(a)} ]

The goal of the planner is to choose an online algorithm that maximizes the welfare

for large values of T (see Theorem 2.5.1, Theorem 2.5.2, and Theorem 2.6.1 for the

dependence of our results on T ).

It is instructive to consider the special case where δ = 0, i.e., the cost of waiting

is negligible compared to the cost of leaving the market unmatched. In this case,

the goal of the planner is to match the maximum number of agents, or equivalently

to minimize the number of perished agents. The loss of an online algorithm ALG is

defined as the ratio of the expected18 number of perished agents to the expected size

of A,

L(ALG) := E[|A − ALG(T) − AT|] / E[|A|] = E[|A − ALG(T) − AT|] / (mT).

When we assume δ = 0, we will use the L notation for the planner’s loss function.

When we consider δ > 0, we will use the W notation for social welfare.
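For a single simulated realization (replacing the expectations with sample averages), both objectives reduce to simple arithmetic. A minimal sketch; the helper names and the toy numbers below are ours:

```python
import math

def welfare(matched_sojourns, m, T, delta):
    """Sample analogue of W(ALG): sum of discounted utilities e^{-delta*s(a)}
    over matched agents, normalized by mT."""
    return sum(math.exp(-delta * s) for s in matched_sojourns) / (m * T)

def loss(num_perished, m, T):
    """Sample analogue of L(ALG): perished agents normalized by mT = E[|A|]."""
    return num_perished / (m * T)

# Toy realization with mT = 100 expected arrivals:
# 70 agents matched (each with sojourn 0.5), 25 perished, 5 still in the pool at T.
w0 = welfare([0.5] * 70, m=10, T=10, delta=0.0)   # delta = 0: just 70/100
l0 = loss(25, m=10, T=10)                          # 25/100
```

With δ = 0, w0 + l0 accounts for every agent except the 5 remaining in the pool at time T, which illustrates why maximizing welfare is then equivalent to minimizing loss.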

Each of the above optimization problems can be modeled as a Markov Decision

Problem (MDP)19 that is defined as follows. The state space is the set of pairs (H,B)

where H is any undirected graph of any size, and if the algorithm knows the set of

critical agents, B is a set of at most one vertex of H representing the corresponding

critical agent. The action space for a given state is the set of matchings on the

graph H. Under this conception, an algorithm designer wants to minimize the loss

or maximize the social welfare over a time period T .

Although this MDP has an infinite number of states, with small error one can reduce the state space to graphs of size at most O(m). Even in that case, this MDP has an exponential number of states in m, since there are at least 2^(m choose 2)/m! distinct graphs of

18We consider the expected value as a modeling choice. One may also be interested in objective functions that depend on the variance of the performance, as well as the expected value. As will be seen later in the paper, the performance of our algorithms is highly concentrated around its expected value, which guarantees that the variance is very small in most of the cases.

19We recommend [16] for background on Markov Decision Processes.


size m,20 so for even moderately large markets,21 we cannot apply tools from the dynamic programming literature to find the optimum online matching algorithm.
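The footnoted figure for m = 30 can be checked directly; the computation below is ours, using the 2^(m choose 2)/m! lower bound from the text:

```python
from math import comb, factorial, log10

m = 30
# Lower bound on the number of distinct graphs on m agents:
# 2^C(m,2) labeled graphs, conservatively divided by m! re-labellings.
num_states = 2 ** comb(m, 2) // factorial(m)
magnitude = log10(num_states)   # about 98.5, i.e. more than 10^98 states
```

Even this conservative count dwarfs anything a tabular dynamic-programming solver could enumerate, which motivates the paper's strategy of comparing simple algorithms against analytic bounds instead.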

Optimum Solutions. In many parts of this paper we compare the performance of

an online algorithm to the performance of an optimal omniscient algorithm. Unlike

any online algorithm, the omniscient algorithm has full information about the future,

i.e., it knows the full realization of the graph G.22 Therefore, it can return the

maximum matching in this graph as its output, and thus minimize the fraction of

perished agents. Let OMN(T ) be the set of matched agents in the maximum matching

of G. The loss function under the omniscient algorithm at time T is

L(OMN) := E[|A − OMN(T) − AT|] / (mT).

Observe that for any online algorithm, ALG, and any realization of the probability

space, we have |ALG(T )| ≤ |OMN(T )|.23

It is also instructive to study the optimum online algorithm, an online algorithm

with unlimited computational power. By definition, an optimum online algorithm can

solve the exponential-sized state space Markov Decision Problem and return the best

policy function from states to matchings. We first consider OPTc, the algorithm that

knows the set of critical agents at time t (with associated loss L(OPTc)). We then

relax this assumption and consider OPT, the algorithm that does not know these sets

(with associated loss L(OPT)).

Let ALGc be the loss under any online algorithm that knows the set of critical

20This lower bound is derived as follows: When there are m agents, there are (m choose 2) possible edges, each of which may be present or absent. Some of these graphs may have the same structure but different agent indices. A conservative lower bound is to divide by all possible re-labellings of the agents (m!).

21For instance, for m = 30, there are more than 10^98 states in the approximated MDP.
22In computer science, these are equivalently called offline algorithms.
23This follows from a straightforward revealed-preference argument: For any realization, the optimum offline policy has the information to replicate any given online policy, so it must do weakly better.


agents at time t. It follows that

L(ALGc) ≥ L(OPTc) ≥ L(OMN).

Similarly, let ALG be the loss under any online algorithm that does not know the

set of critical agents at time t. It follows that24

L(ALG) ≥ L(OPT) ≥ L(OPTc) ≥ L(OMN).

2.2 Our Contributions

In this section, we present our main contributions and provide intuitions for them.

We first introduce two simple matching algorithms, and then two classes of algorithms

that vary the waiting time. The first algorithm is the Greedy algorithm, which mimics

‘match-as-you-go’ algorithms used in many real marketplaces. It delivers maximal

matchings at any point in time, without regard for the future.

Definition 2.2.1 (Greedy Algorithm). If any new agent a enters the market at time t, then match her with an arbitrary agent in Nt(a) whenever Nt(a) ≠ ∅. We

use L(Greedy) and W(Greedy) to denote the loss and the social welfare under this

algorithm, respectively.

Note that since |A^n_t| ≤ 1 almost surely, we do not need to consider the case where

more than one agent enters the market at any point in time. Observe that the graph

Gt in the Greedy algorithm is (almost) always an empty graph. Hence, the Greedy

algorithm cannot use any information about the set of critical agents.

The second algorithm is a simple online algorithm that preserves two essential

characteristics of OPTc when δ = 0 (recall that OPTc is the optimum online algorithm

with knowledge of the set of critical agents):

i) A pair of agents a, b get matched in OPTc only if one of them is critical. This

24Note that |ALG| and |OPT| are generally incomparable, and depending on the realization of G we may even have |ALG| > |OPT|.


property is called the rule of deferral match: Since δ = 0, if a, b can be matched

and neither of them is critical, we can wait and match them later.

ii) If an agent a is critical at time t and Nt(a) ≠ ∅, then OPTc matches a. This

property is a corollary of the following simple fact: matching a critical agent does

not increase the number of perished agents in any online algorithm.

Our second algorithm is designed to be the simplest possible online algorithm that

satisfies both of the above properties.

Definition 2.2.2 (Patient Algorithm). If an agent a becomes critical at time t, then

match her uniformly at random with an agent in Nt(a) whenever Nt(a) ≠ ∅. We

use L(Patient) and W(Patient) to denote the loss and the social welfare under this

algorithm, respectively.

Observe that unlike the Greedy algorithm, here we need access to the set of critical

agents at time t. We do not intend the timing assumptions about critical agents to

be interpreted literally. An agent’s point of perishing represents the point at which

it ceases to be socially valuable to match that agent. Letting the planner observe the

set of critical agents is a modeling convention that represents high-accuracy short-

horizon information about agents’ departures. An example of such information is the

Model for End-Stage Liver Disease (MELD) score, which accurately predicts 3-month

mortality among patients with chronic liver disease. The US Organ Procurement and

Transplantation Network gives priority to individuals with a higher MELD score,

following a broad medical consensus that liver donor allocation should be based on

urgency of need and not substantially on waiting time [78]. Note that the Patient

algorithm exploits only short-horizon information about urgent cases, as compared

to the Omniscient algorithm which has full information of the future. We discuss

implications of relaxing our informational assumptions in Subsection 2.2.3.

The third algorithm interpolates between the Greedy and the Patient algorithms.

The idea of this algorithm is to assign independent exponential clocks with rates 1/α

where α ∈ [0,∞) to each agent a. If agent a's exponential clock ticks, the market-maker attempts to match her. If she has no neighbors, then she remains in the pool until she becomes critical, at which point the market-maker attempts to match her again.


A technical difficulty with the above matching algorithm is that it is not memoryless; this is because when an agent becomes critical and has no neighbors, she remains in the pool. Therefore, instead of the above algorithm, we study a slightly different matching algorithm (with a worse loss).

Definition 2.2.3 (The Patient(α) algorithm). Assign independent exponential clocks

with rate 1/α where α ∈ [0,∞) to each agent a. If agent a’s exponential clock ticks or

if an agent a becomes critical at time t, match her uniformly at random with an agent

in Nt(a) whenever Nt(a) ≠ ∅. In both cases, if Nt(a) = ∅, treat that agent as if she

has perished; i.e., never match her again. We use L(Patient(α)) and W(Patient(α))

to denote the loss and the social welfare under this algorithm, respectively.
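Before turning to results, the contrast between these rules can be seen in a direct simulation. The sketch below is our own illustration, not part of the formal analysis: it builds a small (m, d) market with Poisson arrivals, Exp(1) sojourns until criticality, and i.i.d. acceptability probability d/m, and reports the realized loss of a Greedy rule (match on arrival when possible) versus a Patient rule (match only at criticality). All function and variable names are ours.

```python
import heapq
import random

def simulate(m, d, T, policy, seed=1):
    """Simulate an (m, d) dynamic matching market on [0, T].

    Agents arrive at Poisson rate m, become critical after an Exp(1)
    sojourn, and each pair is acceptable independently with prob. d/m.
    policy='greedy' : match an arriving agent immediately if possible.
    policy='patient': match an agent only when she becomes critical.
    Returns the realized loss: the fraction of arrivals that perish.
    """
    rng = random.Random(seed)
    p = d / m
    pool = {}  # agent id -> set of acceptable neighbours currently in pool
    events = [(rng.expovariate(m), 'arrive', 0)]
    next_id, arrivals, perished = 1, 0, 0

    def remove(x):
        # take x out of the pool and out of all neighbour sets
        for c in pool.pop(x):
            pool[c].discard(x)

    while events:
        t, kind, a = heapq.heappop(events)
        if t > T:
            break
        if kind == 'arrive':
            arrivals += 1
            nbrs = {b for b in pool if rng.random() < p}
            if policy == 'greedy' and nbrs:
                remove(rng.choice(sorted(nbrs)))       # match on arrival
            else:
                pool[a] = nbrs
                for b in nbrs:
                    pool[b].add(a)
                heapq.heappush(events, (t + rng.expovariate(1.0), 'critical', a))
            heapq.heappush(events, (t + rng.expovariate(m), 'arrive', next_id))
            next_id += 1
        elif a in pool:                                # critical, still unmatched
            nbrs = pool[a]
            remove(a)
            if nbrs:
                remove(rng.choice(sorted(nbrs)))       # match the critical agent
            else:
                perished += 1                          # no neighbour: she perishes
    return perished / max(arrivals, 1)

loss_g = simulate(m=200, d=6, T=20, policy='greedy')
loss_p = simulate(m=200, d=6, T=20, policy='patient')
```

Even at this modest scale the Patient rule's realized loss is markedly smaller than the Greedy rule's, in line with the results below.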

It is easy to see that an upper bound on the loss of the Patient(α) algorithm is an upper bound on the loss of our desired interpolating algorithm. Under this algorithm each agent's exponential clock ticks at rate 1/α, so we search their neighbors for a potential match at rate ᾱ := 1 + 1/α. We refer to ᾱ as the trade frequency. Note that the trade frequency is a decreasing function of α.25

In the rest of this section we describe our contributions. To avoid cumbersome

notation, we state our results in the large-market long-horizon regime (i.e., as m → ∞ and T → ∞). In later sections, we explicitly study the dependency on (m, T). For

example, we show that the transition of the market to the steady state takes no more

than O(log(m)) time units. In other words, many of the large time effects that we

predict in our model can be seen in poly-logarithmic time in the size of the market.

In addition, we only present an overview of the proofs in this section; the rest of the

paper includes detailed analysis of the model and full proofs.

2.2.1 Timing in Matching Markets

Does timing substantially affect the performance of dynamic matching algorithms?

Our first result establishes that varying the timing properties of simple algorithms

has large effects on their performance. In particular, we show that the number of perished agents under the Patient algorithm is exponentially (in d) smaller than the number of perished agents under the Greedy algorithm.

25The use of exponential clocks is a modeling convention that enables us to reduce waiting times while retaining analytically tractable Markov properties.

Theorem 2.2.4. For d ≥ 2, as T, m → ∞,

L(Greedy) ≥ 1/(2d + 1),

L(Patient) ≤ (1/2) · e^{−d/2}.

As a result,

L(Patient) ≤ (d + 1) · e^{−d/2} · L(Greedy).

This theorem shows that the Patient algorithm strongly outperforms the Greedy algorithm. The intuition behind this finding is that, under the Greedy algorithm, there are no acceptable transactions among the set of agents in the pool, and so all critical agents immediately perish. By contrast, the pool is thicker under the Patient algorithm, since it is always an Erdős–Rényi random graph (see Proposition 2.4.1 for a proof of this claim), and such market thickness helps the planner to react to critical cases.

The next question is: are the gains from market thickness large compared to the

total gains from optimization and choosing the right agents to match? First, in the

following example, we show that the Patient algorithm is not optimal because it

ignores the global graph structure.

Example 2.2.5. Let Gt be the graph shown in Figure 2.1, and let a2 ∈ A^c_t, i.e., a2

is critical at time t. Observe that it is strictly better to match a2 to a1 as opposed to

a3. Nevertheless, since the Patient algorithm makes decisions that depend only on the

immediate neighbors of the agent it is trying to match, it cannot differentiate between

a1 and a3 and will choose either of them with equal probability.

To quantify gains of optimization, we compare the Patient algorithm to the om-

niscient algorithm, which obviously bounds the performance of the optimum online

algorithm.

The next theorem shows that no algorithm achieves better than exponentially

small loss. Furthermore, the gains from the right timing decision (moving from the Greedy algorithm to the Patient algorithm) are much larger than the remaining gains from optimization (moving from the Patient algorithm to the optimum algorithm).

[Figure 2.1 here: the path graph a1 — a2 — a3 — a4.]

Figure 2.1: If a2 gets critical in the above graph, it is strictly better to match him to a1 as opposed to a3. The Patient algorithm, however, chooses either of a1 or a3 with equal probability.

Theorem 2.2.6. For d ≥ 2, as T, m → ∞,

e^{−d}/(d + 1) ≤ L(OMN) ≤ L(Patient) ≤ (1/2) · e^{−d/2}.

This constitutes an answer to the “when to match versus whom to match” ques-

tion. Recall that OMN has perfect foresight, and bounds the performance of any

algorithm, including OPTc, the globally optimal solution. Thus, Theorem 2.2.6 im-

plies that the marginal effect of a globally optimal solution is small relative to the

effect of making the right timing decision. In many settings, optimal solutions may

be computationally demanding and difficult to implement. Thus, this result suggests

that it will often be more worthwhile for policymakers to find ways to thicken the

market, rather than to seek potentially complicated optimal policies.26

A planner, however, may not be willing to implement the Patient algorithm for

various reasons. First, the cost of waiting is usually not zero; in other words, agents

prefer to be matched earlier (we discuss this cost in detail in Subsection 2.2.2). Second,

the planner may be in competition with other exchange platforms and may be able to

attract more agents by advertising reasonably short waiting times. Hence, we study the performance of the Patient(α) algorithm, which introduces a way to speed up the Patient algorithm. The next result shows that when α is not 'too small' (i.e., the exponential clocks of the agents do not tick at a very fast rate), then the Patient(α) algorithm still (strongly) outperforms the Greedy algorithm. In other words, even waiting for a moderate time can substantially reduce perishings.

26It is worth emphasizing that this result (as well as Theorem 2.2.11) proves that local algorithms are close-to-optimal; since in our model agents are ex ante homogeneous, this shows that "who to match" is not as important as "when to match". In settings where agents have multiple types, however, the decision of "who to match" can be an important one even when it is local. For instance, suppose a critical agent has two neighbors, one who is hard-to-match and one who is easy-to-match. Then, all else equal, the optimal policy should match the critical agent to the hard-to-match neighbor. This result has been formally proved in a new working paper by Akbarpour, Nikzad and Roth, in which they show that breaking ties in favor of hard-to-match agents reduces the loss.

Theorem 2.2.7. Let ᾱ := 1/α + 1. For d ≥ 2, as T, m → ∞,

L(Patient(α)) ≤ (d + 1) · e^{−d/(2ᾱ)} · L(Greedy).

A numerical example clarifies the significance of this result. Consider the case of

a kidney exchange market, where 1000 new patients arrive to the market every year,

their average sojourn is 1 year, and they can exchange kidneys with a random patient with probability 1/100; that is, d = 10. The above result for the Patient(α) algorithm

suggests that the market-maker can promise to match agents in less than 4 months

(in expectation) while the fraction of perished agents is at most 37% of the Greedy

algorithm. Note that if we employ the Patient algorithm, the fraction of perished

agents will be at most 7% of the Greedy algorithm.
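The arithmetic behind these percentages can be checked directly from the bound in Theorem 2.2.7; the specific trade-frequency value ᾱ ≈ 1.475 below is our own back-of-envelope choice for matching the 37% figure, not a number taken from the text.

```python
import math

d = 10  # average acceptability degree in the kidney-exchange example

# Patient algorithm (alpha = infinity, so abar = 1): the bound of Theorem 2.2.4,
# (d + 1) * e^{-d/2}, gives at most ~7% of the Greedy algorithm's loss.
patient_ratio = (d + 1) * math.exp(-d / 2)

# Patient(alpha) with trade frequency abar = 1 + 1/alpha ~ 1.475: the bound
# (d + 1) * e^{-d/(2*abar)} of Theorem 2.2.7 gives roughly 37% of Greedy's loss.
abar = 1.475
tuned_ratio = (d + 1) * math.exp(-d / (2 * abar))
```

With ᾱ of this size, each agent is searched at rate ᾱ and is in addition matched through her neighbors' searches, which is consistent with expected waits of a few months; that last back-of-envelope step is ours, not the text's.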

We now present a proof sketch for all of the theorems presented in this section. The details of the proofs are discussed in the rest of the paper.

Proof Overview. [Theorem 2.2.4, Theorem 2.2.6, and Theorem 2.2.7] We first sketch the proof of Theorem 2.2.4. We show that for large enough values of T and m, (i) L(Patient) ≤ e^{−d/2}/2 and (ii) L(OPT) ≥ 1/(2d + 1). By the fact that L(Greedy) ≥ L(OPT) (since the Greedy algorithm does not use information about critical agents),

Theorem 2.2.4 follows immediately. The key idea in proving both parts is to carefully

study the distribution of the pool size, Zt, under any of these algorithms.

For (i), we show that the pool size under the Patient algorithm is a Markov chain, that it has a unique stationary distribution, and that it mixes rapidly to the stationary distribution

(see Theorem 2.4.2). This implies that for t larger than mixing time, Zt is essentially

distributed as the stationary distribution of the Markov Chain. We show that under

the stationary distribution, with high probability, Zt ∈ [m/2,m]. Therefore, any


critical agent has no acceptable transactions with probability at most (1 − d/m)^{m/2} ≤ e^{−d/2}. This proves (i) (see Subsection 2.5.2 for the exact analysis of the Patient

Markov chain).

For (ii), note that for any algorithm which lacks information about critical agents, the expected perishing rate is equal to the pool size, because any critical agent perishes with probability one, and agents become critical at rate 1. Therefore, if the expected

pool size is large, the perishing rate is very high. On the other hand, if the expected

pool size is very low, the perishing rate is again very high because there will be many

agents with no acceptable transactions upon their arrival or during their sojourn. We

analyze the trade-off between the above two extreme cases and show that even if the

pool size is optimally chosen, the loss cannot be less than what we claimed in (ii)

(see Theorem 2.3.1).

We prove Theorem 2.2.6 by showing that L(OMN) ≥ e^{−d}/(d + 1). To do so, we provide a lower bound for the fraction of agents who arrive at the market at some point in

time and have no acceptable transactions during their sojourn (see Theorem 2.3.2).

Note that even with full knowledge about the realization of the process, those agents

cannot be matched.

We now sketch the proof of Theorem 2.2.7. By the additivity of the Poisson process, the loss of the Patient(α) algorithm in a (m, d, 1) matching market is equal to the loss of the Patient algorithm in a (m, d, ᾱ) matching market, where ᾱ = 1/α + 1.

The next step is to show that a matching market (m, d, ᾱ) is equivalent to a matching market (m/ᾱ, d/ᾱ, 1), in the sense that any quantity in these two markets is the same up to a time scale (see Definition 2.5.11). By this fact, the loss of the Patient algorithm on a (m, d, ᾱ) matching market at time T is equal to the loss of the Patient algorithm on a (m/ᾱ, d/ᾱ, 1) market at time ᾱT. But we have already upper bounded the latter in Theorem 2.2.4.

Remark 2.2.8. One may interpret the above result as demonstrating the value of

information (i.e. knowledge of the set of critical agents) as opposed to the value of

waiting. This is not our interpretation, since the Greedy algorithm cannot improve


its performance even if it has knowledge of the set of critical agents. The graph

Gt is almost surely an empty graph, so there is no possibility of matching a critical

agent in the Greedy algorithm. The Patient algorithm strongly outperforms the Greedy

algorithm because it waits long enough to have a large pool of agents with many

acceptable bilateral transactions.

2.2.2 Welfare Under Discounting and Optimal Waiting Time

In this part we explicitly account for the cost of waiting and study online algorithms

that optimize social welfare. It is clear that if agents are very impatient (i.e., they have

very high waiting costs), it is better to implement the Greedy algorithm. On the other

hand, if agents are very patient (i.e., they have very low waiting costs), it is better to

implement the Patient algorithm. Therefore, a natural welfare economics question is:

For which values of δ is the Patient algorithm (or the Patient(α) algorithm) socially

preferred to the Greedy algorithm?

Our next result studies social welfare under the Patient, Patient(α) and Greedy

algorithms. We show that for small enough δ, there exists a value of α such that the

Patient(α) algorithm is socially preferable to the Greedy algorithm.

Theorem 2.2.9. For any 0 ≤ δ ≤ 1/(8 log(d)), there exists an α ≥ 0 such that as m, T → ∞,

W(Patient(α)) ≥ W(Greedy).

In particular, for δ ≤ 1/(2d) and d ≥ 5, we have

W(Patient) ≥ W(Greedy).

A numerical example illustrates these magnitudes. Consider a barter market,

where 100 new traders arrive at the market every week, their average sojourn is one

week, and there is a satisfactory trade between two random agents in the market with

probability 0.05; that is, d = 5. Then our welfare analysis implies that if the cost

associated with waiting for one week is less than 10% of the surplus from a typical

trade, then the Patient(α) algorithm, for a tuned value of α, is socially preferred to


the Greedy algorithm.

When agents discount the future, how should the planner trade off the frequency of transactions and the thickness of the market? To answer this, we characterize the optimal trade frequency under the Patient(α) algorithm. Recall that under this algorithm each agent's exponential clock ticks at rate 1/α, so we search their neighbors for a potential match at rate ᾱ := 1 + 1/α. The optimal ᾱ, the trade frequency, is stated in the following theorem.

Theorem 2.2.10. Given d, δ, as m, T → ∞, there exists d̄ ∈ [d/2, d] as a function of m, d,27 such that the Patient(1/(max{ᾱ∗, 1} − 1)) algorithm, where ᾱ∗ is the solution of

δ − (δ + d̄ + ᾱ) · e^{−d̄/ᾱ} = 0, (2.2.1)

attains the largest welfare among all Patient(α) algorithms. In particular, if δ < d/4, then d̄/log(2d̄/δ) ≤ ᾱ∗ ≤ d̄/log(d̄/(2δ)).

In addition, if δ < (d − 1)/2, then ᾱ∗ is a non-decreasing function of δ and d.

Figure 2.2 illustrates max{ᾱ∗, 1} as a function of δ. As one would expect, the

optimal trade frequency is increasing in δ. Moreover, Theorem 2.2.10 indicates that

the optimal trade frequency is increasing in d. In Subsection 2.2.1, we showed that

L(Patient) is exponentially smaller in d than L(Greedy). This may suggest that wait-

ing is mostly valuable in dense graphs. By contrast, Theorem 2.2.10 shows that one

should wait longer as the graph becomes sparser. Intuitively, an algorithm performs

well if, whenever it searches neighbors of a critical agent for a potential match, it can

find a match with very high probability. This probability is a function of both the

pool size and d. When d is smaller, the pool size should be larger (i.e. the trade

frequency should be lower) so that the probability of finding a match remains high.

For larger values of d, on the other hand, a smaller pool size suffices.
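These comparative statics can be checked numerically. The sketch below solves our reading of the garbled display (2.2.1), δ = (δ + d̄ + ᾱ)e^{−d̄/ᾱ}, by bisection, taking d̄ = d purely for illustration (in the theorem d̄ ∈ [d/2, d] is pinned down by equation (A.3.7)); it confirms that the optimal trade frequency rises in δ and in d.

```python
import math

def optimal_abar(d_bar, delta, lo=1e-6, hi=1e6, iters=200):
    """Bisection for the root of f(abar) = delta - (delta + d_bar + abar)*exp(-d_bar/abar).

    f is positive near 0 (the exponential vanishes) and negative for large
    abar (the exponential tends to 1), and (delta + d_bar + abar)*exp(-d_bar/abar)
    is increasing in abar, so the root is unique and bracketed.
    """
    f = lambda a: delta - (delta + d_bar + a) * math.exp(-d_bar / a)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a_low_delta = optimal_abar(10, 0.1)   # ~2.1: patient agents, moderate frequency
a_high_delta = optimal_abar(10, 1.0)  # larger delta -> trade more often
a_sparse = optimal_abar(5, 0.1)       # sparser graph -> wait longer
```

Consistent with Theorem 2.2.10, the solver returns a larger ᾱ∗ for the higher discount rate and a smaller ᾱ∗ for the sparser graph.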

Finally, we note that Patient (i.e. α = ∞) is the optimal policy for a range of

parameter values. To see why, suppose agents discount their welfare, but never perish.

The planner would still wish to match them at some positive rate. For a range of

27More precisely, d̄ := xd/m, where x is the solution of equation (A.3.7).


[Figure 2.2 here: ᾱ∗ plotted against δ ∈ [0, 0.25], with one curve for each of d = 5, 7.5, 10, 12.5, 15.]

Figure 2.2: Optimal trade frequency (max{ᾱ∗, 1}) as a function of the discount rate for different values of d. Our analysis shows that the optimal trade frequency is smaller in sparser graphs.

parameters, this positive rate is less than 1. In a world where agents perish, ᾱ is bounded below by 1, and we have a corner solution for such parameter values.

Proof Overview. [Theorem 2.2.9 and Theorem 2.2.10] We first show that for large values of m and T, (i) W(Greedy) ≤ 1 − 1/(2d + 1), and (ii) W(Patient(α)) ≈ 2(1 − e^{−d/(2ᾱ)})/(2 − e^{−d/(2ᾱ)} + δ/ᾱ), where ᾱ = 1 + 1/α. The proof of (i) is very simple: a 1/(2d + 1) fraction of agents perish under the Greedy algorithm. Therefore, even if all of the matched agents receive a utility of 1, the social welfare is no more than 1 − 1/(2d + 1).

The proof of (ii) is more involved. The idea is to define a random variable Xt

representing the potential utility of the agents in the pool at time t, i.e., if all agents

who are in the pool at time t get matched immediately, then they receive a total

utility of Xt. We show that Xt can be estimated with a small error by studying the

evolution of the system through a differential equation. Given Xt and the pool size at

time t, Zt, the expected utility of an agent that is matched at time t is exactly Xt/Zt.

Using our concentration results on Zt, we can then compute the expected utility of


the agents that are matched in any interval [t, t + dt]. Integrating over all t ∈ [0, T ]

proves the claim. (See Section 2.6 for an exact analysis of welfare under the Patient algorithm.)
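As a sanity check on the welfare comparison, one can evaluate the approximation in (ii), W(Patient(α)) ≈ 2(1 − e^{−d/(2ᾱ)})/(2 − e^{−d/(2ᾱ)} + δ/ᾱ), against the Greedy upper bound from (i); the parameter values below are our own illustration, chosen to satisfy the conditions of Theorem 2.2.9.

```python
import math

def w_patient_approx(d, delta, abar=1.0):
    """Approximate W(Patient(alpha)) from the proof overview:
    W ~ 2*(1 - e^{-d/(2*abar)}) / (2 - e^{-d/(2*abar)} + delta/abar)."""
    e = math.exp(-d / (2 * abar))
    return 2 * (1 - e) / (2 - e + delta / abar)

d, delta = 5, 0.05                      # delta <= 1/(2d) and d >= 5
w_greedy_upper = 1 - 1 / (2 * d + 1)    # bound (i): Greedy welfare at most 10/11
w_patient = w_patient_approx(d, delta)  # abar = 1, i.e. the Patient algorithm
# Here the Patient approximation exceeds even the *upper* bound on Greedy welfare.
```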

To prove Theorem 2.2.10, we characterize the unique global maximum of W(Patient(α)).

The key point is that the optimum value of ᾱ (= 1 + 1/α) is less than 1 for a range of parameters. However, since α ≥ 0, we must have ᾱ ∈ [1,∞). Therefore, whenever the solution of equation (2.2.1) is less than 1, the optimal ᾱ is 1 and we have a corner solution; i.e., setting α = ∞ (running the Patient algorithm) is optimal.

2.2.3 Information and Incentive-Compatibility

Up to this point, we have assumed that the planner knows the set of critical agents;

i.e. he has accurate short-horizon information about agent departures. We now relax

this assumption in both directions.

Suppose that the planner does not know the set of critical agents. That is, the

planner’s policy may depend on the graph Gt, but not on the set of critical agents A^c_t.

Recall that OPT is the optimum algorithm subject to these constraints.

Theorem 2.2.11. For d ≥ 2, as T, m → ∞,

1/(2d + 1) ≤ L(OPT) ≤ L(Greedy) ≤ log(2)/d.

Theorem 2.2.11 shows that the loss of OPT and Greedy are relatively close. This

indicates that waiting and criticality information are complements, in that waiting

to thicken the market is substantially valuable only when the planner can identify

urgent cases. Observe that OPT could in principle wait to thicken the market, but

the gains from doing so (compared to running the Greedy algorithm) are not large.

Under these new information assumptions, we once more find that local algorithms

can perform close to computationally intensive global optima.
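The two bounds in Theorem 2.2.11 are within a small constant factor of each other, which can be checked directly (the range of d below is our own choice):

```python
import math

# Ratio of the upper bound log(2)/d to the lower bound 1/(2d + 1):
# log(2) * (2d + 1) / d, which is decreasing in d.
ratios = [math.log(2) * (2 * d + 1) / d for d in range(2, 101)]
# The gap is ~1.73 at d = 2 and falls toward 2*log(2) ~ 1.386 as d grows,
# so OPT and Greedy losses are pinned within a factor of about 2.
assert max(ratios) < 2
```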


Moreover, Theorem 2.2.4 and Theorem 2.2.11 together show that criticality information is valuable, since the loss of Patient, which naïvely exploits criticality information, is exponentially smaller than the loss of OPT, the optimal algorithm without

this information.

What if the planner knows more than just the set of critical agents? For instance,

the planner may have long-horizon forecasts of agent departure times, or the planner

may know that certain agents are more likely to have acceptable transactions in

the future than other agents.28 However, Theorem 2.2.6 shows that no expansion of

the planner’s information set yields a better-than-exponential loss. This is because

L(OMN) is the loss under a maximal expansion of the planner’s information; the case

where the planner has perfect foreknowledge of the future.

Taken together, these results suggest that criticality information is particularly

valuable. This information is necessary to achieve exponentially small loss, and no

expansion of information enables an algorithm to achieve better-than-exponential

loss.

However, in many settings, it is plausible that agents have privileged insight into

their own departure timings. In such cases, agents may have incentives to misreport

whether they are critical, in order to increase their chance of getting matched or to

decrease their waiting time. We now exhibit a truthful mechanism without transfers

that elicits such information from agents, and implements the Patient(α) algorithm.

We assume that agents are fully rational and know the underlying parameters,

but they do not observe the actual realization of the stochastic process. That is,

agents observe whether they are critical, but do not observe Gt, while the planner

observes Gt but does not observe which agents are critical. Consequently, agents’

strategies are independent of the realized sample path. Our results are sensitive to

this assumption29; for instance, if the agent knew that she had a neighbor, or knew that the pool at that moment was very large, she would have an incentive under our mechanism to falsely report that she was critical.

28In our model, the number of acceptable transactions that a given agent will have with the next N agents to arrive is binomially distributed. If the planner knows beforehand whether a given agent's realization is above or below the 50th percentile of this distribution, it is as though agents have different 'types'.

29This assumption is plausible in many settings; generally, centralized brokers know more about the current state of the market than individual traders. Indeed, frequently agents approach centralized brokers because they do not know who is available to trade with them.

The truthful mechanism, Patient-Mechanism(α), is described below.

Definition 2.2.12 (Patient-Mechanism(α)). Assign independent exponential clocks

with rate 1/α to each agent a, where α ∈ [0,∞). Ask agents to report when they

get critical. If an agent’s exponential clock ticks or if she reports becoming critical, the

market-maker attempts to match her to a random neighbor. If the agent has no

neighbors, the market-maker treats her as if she has perished, i.e., she will never be

matched again.

Each agent a selects a mixed strategy by choosing a function ca(·); in the interval

[t, t + dt] after her arrival, if she is not yet critical, she reports being critical with

rate ca(t)dt, and when she truly becomes critical she reports that immediately. Our

main result in this section asserts that if agents are not too impatient, then the

Patient-Mechanism(α) is incentive-compatible in the sense that the truthful strategy

profile is a strong ε-Nash equilibrium.30

Theorem 2.2.13. Suppose that the market is in the stationary distribution and31 d = polylog(m). Let ᾱ = 1/α + 1 and β = ᾱ(1 − d/m)^{m/ᾱ}. Then, for 0 ≤ δ ≤ β, ca(t) = 0 for all a, t (i.e., the truthful strategy profile) is a strong ε-Nash equilibrium for Patient-Mechanism(α), where ε → 0 as m → ∞.

If d ≥ 2 and 0 ≤ δ ≤ e^{−d/2}, the truthful strategy profile is a strong ε-Nash equilibrium for Patient-Mechanism(∞), where ε → 0 as m → ∞.
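To get a feel for the threshold β = ᾱ(1 − d/m)^{m/ᾱ} in this theorem, note that for large m it behaves like ᾱe^{−d/ᾱ}; the parameter values below are our own illustration.

```python
import math

m, d, alpha = 10**6, 8, 1.0
abar = 1 / alpha + 1                       # = 2, the trade frequency
beta = abar * (1 - d / m) ** (m / abar)    # patience threshold on delta
# For large m, beta ~ abar * exp(-d/abar) = 2 * e^{-4} ~ 0.037: truth-telling
# is an equilibrium only when the waiting cost delta is below this level.
```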

Proof Overview. There is a hidden obstacle in proving that truthful reporting is

incentive-compatible: Even if one assumes that the market is in a stationary distri-

bution at the point an agent enters, the agent’s beliefs about pool size may change

as time passes. In particular, an agent makes inferences about the current distribution of pool size, conditional on not having been matched yet, and this conditional distribution is different from the stationary distribution. This makes it difficult to compute the payoffs from deviations from truthful reporting. We tackle this problem by using the concentration bounds from Proposition 2.5.9, and focusing on strong ε-Nash equilibrium, which allows small deviations from full optimality.

30Any strong ε-Nash equilibrium is an ε-Nash equilibrium. For a definition of strong ε-Nash equilibrium, see Definition 2.7.1.

31polylog(m) denotes any polynomial function of log(m). In particular, d = polylog(m) if d is a constant independent of m.

ways under Patient-Mechanism(∞): Either she becomes critical, and has a neighbor,

or one of her neighbors becomes critical, and is matched to her. By symmetry, the

chance of either happening is the same, because with probability 1, every matched

pair consists of one critical agent and one non-critical agent. When an agent declares

that she is critical, she is taking her chance that she has a neighbor in the pool right

now. By contrast, if she waits, there is some probability that another agent will

become critical and be matched to her. Consequently, for small δ, agents will opt to

wait.

2.2.4 Technical Contributions

As alluded to above, most of our results follow from concentration results on the

distribution of the pool size for each of the online algorithms that are stated in

Proposition 2.5.5 and Proposition 2.5.9. In this last part we describe ideas behind

these crucial results.

For analyzing many classes of stochastic processes, one needs to prove concentration bounds on functions defined on the underlying process by means of central limit theorems, Chernoff bounds, or Azuma-Hoeffding bounds. In our case many of these tools fail. This is because we are interested in proving that for any large time t, a given function is concentrated in an interval whose size depends only on d and m, and not on t. Since t can be significantly larger than d and m, a direct proof fails.

In contrast, we observe that Zt is a Markov Chain for a large class of online

algorithms. Building on this observation, first we show that the underlying Markov

Chain has a unique stationary distribution and it mixes rapidly. Then we use the

stationary distribution of the Markov Chain to prove our concentration bounds.

However, that is not the end of the story. We do not have a closed form expression


for the stationary distribution of the chain, because we are dealing with an infinite-state-space, continuous-time Markov Chain where the transition rates are complex

functions of the states. Instead, we use the following trick. Suppose we want to prove

that Zt is contained in an interval [k∗ − f(m, d), k∗ + f(m, d)] for some k∗ ∈ N with

high probability, where f(m, d) is a function of m, d that does not depend on t. We

consider a sequence of pairs of states P1 := (k∗ − 1, k∗ + 1), P2 := (k∗ − 2, k∗ + 2),

etc. We show that if the Markov Chain is at any of the states of Pi, it is more likely

(by an additive function of m, d) that it jumps to a state of Pi−1 as opposed to Pi+1.

Using balance equations and simple algebraic manipulations, this implies that the probability of states in Pi decreases geometrically as i increases. In other words, Zt is

concentrated in a small interval around k∗. We believe that this technique can be

used in studying other complex stochastic processes.
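The mechanics of this pairing argument are easiest to see in a toy birth–death chain (ours for illustration only; the actual pool-size chain has far more complex transition rates): when the drift points back toward a center state k∗, detailed balance forces the stationary mass to fall off geometrically away from k∗.

```python
# Toy birth-death chain: births at rate m in every state, deaths at rate k in
# state k. Detailed balance gives pi_{k+1}/pi_k = m/(k+1), so above the mode
# k* ~ m each extra step away sheds a constant (and shrinking) factor of mass.
m, K = 40, 200
w = [1.0]
for k in range(K):
    w.append(w[-1] * m / (k + 1))          # unnormalized pi_{k+1}
total = sum(w)
pi = [x / total for x in w]                # stationary distribution
k_star = max(range(K + 1), key=lambda k: pi[k])
# Successive ratios above k* are all below 1 and keep shrinking:
ratios = [pi[k + 1] / pi[k] for k in range(k_star + 5, k_star + 50)]
```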

2.3 Performance of the Optimum and Periodic Algorithms

In this section we lower-bound the loss of the optimum solutions in terms of d. In

particular, we prove the following theorems.

Theorem 2.3.1. If m > 10d, then for any T > 0,

L(OPT) ≥ 1/(2d + 1 + d²/m).

Theorem 2.3.2. If m > 10d, then for any T > 0,

L(OMN) ≥ e^{−d−d²/m}/(d + 1 + d²/m).

Before proving the above theorems, it is useful to study the evolution of the system

in the case of the inactive algorithm, i.e., where the online algorithm does nothing

and no agents ever get matched. We use this analysis later in this section, as well as in

Section 2.4 and Section 2.5.


We adopt the notation Āt and Z̄t to denote the agents in the pool and the pool size in this case. Observe that by definition, for any matching algorithm and any realization of the process,

Zt ≤ Z̄t. (2.3.1)

Using the above equation, in the following fact we show that for any matching algorithm, E[Zt] ≤ m.

Proposition 2.3.3. For any t0 ≥ 0,

P[Z̄t0 = ℓ] ≤ m^ℓ/ℓ!.

In fact, Z̄t0 is distributed as a Poisson random variable of rate m(1 − e^{−t0}), so

E[Z̄t0] = (1 − e^{−t0})m.

Proof. Let K be a random variable indicating the number agents who enter the pool

in the interval [0, t0]. By Bayes rule,

P[Zt0 = `

]=∞∑k=0

P[Zt0 = `,K = k

]=∞∑k=0

P[Zt0 = `|K = k

]· (mt0)ke−mt0

k!,

where the last equation follows by the fact that arrival rate of the agents is a Poisson

random variable of rate m.

Now, conditioned on the event that an agent a arrives in the interval [0, t0], the

probability that she is in the pool at time t0 is at least,

P [Xai = 1] =

∫ t0

t=0

1

t0P [s(ai) ≥ t0 − t] dt =

1

t0

∫ t0

t=0

et−t0dt =1− e−t0

t0.

Therefore, conditioned on K = k, the distribution of the number of agents at time t0

is a Binomial random variable B(k, p), where p := (1− e−t0)/t0. Let µ = m(1− e−t0),

Page 47: ALGORITHMIC MARKET DESIGN A DISSERTATION SUBMITTED …xf731pn2513/Thesis-Akbarpour... · Approved for the Stanford University Committee on Graduate Studies. Patricia J. Gumport, Vice

CHAPTER 2. DYNAMIC MATCHING MARKETS 35

we have
$$\mathbb{P}\big[\bar{Z}_{t_0} = \ell\big] = \sum_{k=\ell}^{\infty} \binom{k}{\ell}\, p^\ell (1-p)^{k-\ell}\, \frac{(mt_0)^k e^{-mt_0}}{k!} = \sum_{k=\ell}^{\infty} \frac{m^k e^{-mt_0}}{\ell!\,(k-\ell)!}\,(1-e^{-t_0})^\ell (t_0 - 1 + e^{-t_0})^{k-\ell}$$
$$= \frac{e^{-mt_0}\mu^\ell}{\ell!} \sum_{k=\ell}^{\infty} \frac{(mt_0 - \mu)^{k-\ell}}{(k-\ell)!} = \frac{\mu^\ell e^{-\mu}}{\ell!}.$$
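Proposition 2.3.3 is easy to confirm numerically. The sketch below (not part of the proof; the values $m = 30$, $t_0 = 1$ are arbitrary) simulates the inactive algorithm directly and compares the empirical mean pool size with the Poisson mean $m(1-e^{-t_0})$.

```python
import math
import random

def inactive_pool_size(m, t0, rng):
    """Sample the pool size at time t0 when nobody is ever matched.

    Agents arrive as a Poisson process of rate m on [0, t0]; each agent has
    an independent Exp(1) sojourn time and simply waits until it expires."""
    count, t = 0, rng.expovariate(m)
    while t < t0:
        if rng.expovariate(1.0) >= t0 - t:  # still in the pool at time t0
            count += 1
        t += rng.expovariate(m)
    return count

m, t0 = 30, 1.0
rng = random.Random(0)
trials = 20000
mean = sum(inactive_pool_size(m, t0, rng) for _ in range(trials)) / trials
print(mean, m * (1 - math.exp(-t0)))  # empirical vs. theoretical mean
```

The empirical mean lands within Monte Carlo noise of $m(1-e^{-t_0}) \approx 18.96$.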

2.3.1 Loss of the Optimum Online Algorithm

In this section, we prove Theorem 2.3.1. Let $\zeta$ be the expected pool size of OPT,
$$\zeta := \mathbb{E}_{t\sim\mathrm{unif}[0,T]}[Z_t].$$
Since OPT does not know $A^c_t$, each critical agent perishes with probability 1. Therefore,
$$L(\mathrm{OPT}) = \frac{1}{m\cdot T}\,\mathbb{E}\Big[\int_{t=0}^{T} Z_t\,dt\Big] = \frac{\zeta T}{mT} = \zeta/m. \tag{2.3.2}$$

To finish the proof we need to lower-bound $\zeta$ by $m/(2d+1+d^2/m)$. We provide an indirect proof: we show a lower bound on $L(\mathrm{OPT})$, which in turn lower-bounds $\zeta$. Our idea is to lower-bound the probability that an agent has no acceptable transactions throughout her sojourn; this directly lower-bounds $L(\mathrm{OPT})$, as such agents cannot be matched under any algorithm. Fix an agent $a \in A$, say $a$ enters the market at a time $t_0 \sim \mathrm{unif}[0,T]$, and let $s(a) = t$. We can write

$$\mathbb{P}[N(a) = \emptyset] = \int_{t=0}^{\infty} \mathbb{P}[s(a) = t]\cdot \mathbb{E}\big[(1-d/m)^{|A_{t_0}|}\big]\cdot \mathbb{E}\big[(1-d/m)^{|A^n_{t_0,t+t_0}|}\big]\,dt. \tag{2.3.3}$$

To see the above, note that $a$ has no acceptable transactions if she has no neighbors upon arrival and none of the new agents that arrive during her sojourn is connected to her. Using Jensen's inequality, we have

$$\mathbb{P}[N(a) = \emptyset] \ge \int_{t=0}^{\infty} e^{-t}\,(1-d/m)^{\mathbb{E}[Z_{t_0}]}\,(1-d/m)^{\mathbb{E}[|A^n_{t_0,t+t_0}|]}\,dt = \int_{t=0}^{\infty} e^{-t}\,(1-d/m)^{\zeta}\,(1-d/m)^{mt}\,dt.$$
The last equality follows from the fact that $\mathbb{E}\big[|A^n_{t_0,t+t_0}|\big] = mt$. Since $d/m < 1/10$, we have $1-d/m \ge e^{-d/m-d^2/m^2}$, so
$$L(\mathrm{OPT}) \ge \mathbb{P}[N(a) = \emptyset] \ge e^{-\zeta(d/m+d^2/m^2)} \int_{t=0}^{\infty} e^{-t(1+d+d^2/m)}\,dt \ge \frac{1-\zeta(1+d/m)d/m}{1+d+d^2/m}. \tag{2.3.4}$$

Putting (2.3.2) and (2.3.4) together, for $\beta := \zeta d/m$ we get
$$L(\mathrm{OPT}) \ge \max\Big\{\frac{1-\beta(1+d/m)}{1+d+d^2/m},\ \frac{\beta}{d}\Big\} \ge \frac{1}{2d+1+d^2/m},$$
where the last inequality follows by letting $\beta = \frac{d}{2d+1+d^2/m}$ be the minimizer of the middle expression.

2.3.2 Loss of the Omniscient Algorithm

In this section, we prove Theorem 2.3.2. This demonstrates that, even in the case that the planner observes the critical agents, no policy can yield a faster-than-exponential decrease in losses as a function of the average degree of each agent.$^{32}$

The proof is very similar to that of Theorem 2.3.1. Let $\zeta$ be the expected pool size of OMN,
$$\zeta := \mathbb{E}_{t\sim\mathrm{unif}[0,T]}[Z_t].$$
By (2.3.1) and Proposition 2.3.3,
$$\zeta \le \mathbb{E}_{t\sim\mathrm{unif}[0,T]}\big[\bar{Z}_t\big] \le m.$$

$^{32}$This, in fact, proves a much stronger claim: it shows that even if the planner knows, upon each agent's arrival, when that agent will become critical, she still cannot achieve a faster-than-exponential decrease in losses.


Note that (2.3.2) does not hold in this case, because the omniscient algorithm knows the set of critical agents at time $t$.

Now, fix an agent $a \in A$, and let us lower-bound the probability that $N(a) = \emptyset$. Say $a$ enters the market at time $t_0 \sim \mathrm{unif}[0,T]$ and $s(a) = t$. Then
$$\mathbb{P}[N(a) = \emptyset] = \int_{t=0}^{\infty} \mathbb{P}[s(a)=t]\cdot\mathbb{E}\big[(1-d/m)^{Z_{t_0}}\big]\cdot\mathbb{E}\big[(1-d/m)^{|A^n_{t_0,t+t_0}|}\big]\,dt$$
$$\ge \int_{t=0}^{\infty} e^{-t}(1-d/m)^{\zeta+mt}\,dt \ge \frac{e^{-\zeta(1+d/m)d/m}}{1+d+d^2/m} \ge \frac{e^{-d-d^2/m}}{1+d+d^2/m},$$
where the first inequality uses Jensen's inequality, the second uses the fact that $1-d/m \ge e^{-d/m-d^2/m^2}$ when $d/m < 1/10$, and the last uses $\zeta \le m$.
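The elementary inequality $1-x \ge e^{-x-x^2}$, used twice above for $x = d/m \in (0, 1/10]$, is easy to spot-check numerically. A quick grid scan (an illustration, not part of the proof):

```python
import math

# Verify 1 - x >= exp(-x - x^2) on a grid covering x = d/m in (0, 1/10],
# the regime m > 10d assumed by Theorems 2.3.1 and 2.3.2.
worst_gap = min((1 - x) - math.exp(-x - x * x)
                for x in (i / 10000.0 for i in range(1, 1001)))
print(worst_gap)  # smallest slack over the grid
```

The gap is non-negative everywhere on the grid, as the Taylor expansion of $\log(1-x)$ suggests.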

2.4 Modeling an Online Algorithm as a Markov Chain

2.4.1 Background

As an important preliminary, we establish that under both the Patient and Greedy algorithms the random processes $Z_t$ are Markovian, have unique stationary distributions, and mix rapidly to the stationary distribution. Before getting into the details, we provide a brief overview of continuous-time Markov chains. We refer interested readers to [64, 56] for detailed discussions.

Let $Z_t$ be a continuous-time Markov chain on the non-negative integers $\mathbb{N}$ that starts from state 0. For any two states $i, j \in \mathbb{N}$, we assume that the rate of going from $i$ to $j$ is $r_{i\to j} \ge 0$. The rate matrix $Q \in \mathbb{R}^{\mathbb{N}\times\mathbb{N}}$ is defined as follows:
$$Q(i,j) := \begin{cases} r_{i\to j} & \text{if } i \ne j,\\ -\sum_{k\ne i} r_{i\to k} & \text{otherwise.}\end{cases}$$

Note that, by definition, the sum of the entries in each row of Q is zero. It turns out


that (see, e.g., [64, Theorem 2.1.1]) the transition probability in $t$ units of time is
$$e^{tQ} = \sum_{i=0}^{\infty} \frac{t^i Q^i}{i!}.$$
Let $P_t := e^{tQ}$ be the transition probability matrix of the Markov chain in $t$ time units. It follows that
$$\frac{d}{dt}P_t = P_t Q. \tag{2.4.1}$$

In particular, in any infinitesimal time step $dt$, the chain moves according to $Q\cdot dt$.

A Markov chain is irreducible if for any pair of states $i, j \in \mathbb{N}$, $j$ is reachable from $i$ with non-zero probability. Fix a state $i \ge 0$, suppose that $Z_{t_0} = i$, and let $T_1$ be the time of the first jump out of $i$ (note that $T_1$ is distributed as an exponential random variable). State $i$ is positive recurrent iff
$$\mathbb{E}\big[\inf\{t \ge T_1 : Z_t = i\} \mid Z_{t_0} = i\big] < \infty. \tag{2.4.2}$$
The ergodic theorem [64, Theorem 3.8.1] entails that a continuous-time Markov chain has a unique stationary distribution if and only if it has a positive recurrent state.

Let $\pi : \mathbb{N} \to \mathbb{R}^+$ be the stationary distribution of a Markov chain. It follows from the definition that $\pi = \pi P_t$ for any $t \ge 0$. The balance equations of a Markov chain say that for any $S \subseteq \mathbb{N}$,
$$\sum_{i\in S,\, j\notin S} \pi(i)\, r_{i\to j} = \sum_{i\in S,\, j\notin S} \pi(j)\, r_{j\to i}. \tag{2.4.3}$$

Let $z_t(\cdot)$ be the distribution of $Z_t$ at time $t \ge 0$, i.e., $z_t(i) := \mathbb{P}[Z_t = i]$ for any integer $i \ge 0$. For any $\varepsilon > 0$, we define the mixing time (in total variation distance) of this Markov chain as
$$\tau_{\mathrm{mix}}(\varepsilon) = \inf\Big\{t : \|z_t - \pi\|_{TV} := \sum_{k=0}^{\infty} |\pi(k) - z_t(k)| \le \varepsilon\Big\}. \tag{2.4.4}$$


2.4.2 Markov Chain Characterization

First, we argue that Zt is a Markov process under the Patient and Greedy algorithms.

This follows from the following simple observation.

Proposition 2.4.1. Under either the Greedy or the Patient algorithm, for any $t \ge 0$, conditioned on $Z_t$, the distribution of $G_t$ is uniquely determined. So, given $Z_t$, $G_t$ is conditionally independent of $Z_{t'}$ for $t' < t$.

Proof. Under the Greedy algorithm, at any time $t \ge 0$, $|E_t| = 0$. Therefore, conditioned on $Z_t$, $G_t$ is an empty graph with $Z_t$ vertices.

For the Patient algorithm, note that the algorithm never looks at the edges between non-critical agents, so it is oblivious to these edges. It follows that under the Patient algorithm, for any $t \ge 0$, conditioned on $Z_t$, $G_t$ is an Erdős–Rényi random graph with $Z_t$ vertices and parameter $d/m$.

The following is the main theorem of this section.

Theorem 2.4.2. For the Patient and Greedy algorithms and any $0 \le t_0 < t_1$,
$$\mathbb{P}\big[Z_{t_1} \,\big|\, Z_t \text{ for } 0 \le t < t_1\big] = \mathbb{P}\big[Z_{t_1} \,\big|\, Z_t \text{ for } t_0 \le t < t_1\big].$$
The corresponding Markov chains have unique stationary distributions and mix in time $O(\log(m)\log(1/\varepsilon))$ in total variation distance:
$$\tau_{\mathrm{mix}}(\varepsilon) \le O(\log(m)\log(1/\varepsilon)).$$

The proof of the theorem can be found in Appendix A.2.

This theorem is crucial in justifying our focus on long-run results in Section 2.2,

since these Markov chains converge very rapidly (in O(log(m)) time) to their station-

ary distributions.


2.5 Performance Analysis

In this section we upper bound L(Greedy) and L(Patient) as a function of d, and we

upper bound L(Patient(α)) as a function of d and α.

We prove the following three theorems.33

Theorem 2.5.1. For any $\varepsilon > 0$ and $T > 0$,
$$L(\mathrm{Greedy}) \le \frac{\log(2)}{d} + \frac{\tau_{\mathrm{mix}}(\varepsilon)}{T} + 6\varepsilon + O\Big(\frac{\log(m/d)}{\sqrt{dm}}\Big), \tag{2.5.1}$$
where $\tau_{\mathrm{mix}}(\varepsilon) \le 2\log(m/d)\log(2/\varepsilon)$.

Theorem 2.5.2. For any $\varepsilon > 0$ and $T > 0$,
$$L(\mathrm{Patient}) \le \max_{z\in[1/2,1]} \big(z + \tilde O(1/\sqrt{m})\big)e^{-zd} + \frac{\tau_{\mathrm{mix}}(\varepsilon)}{T} + \frac{\varepsilon m}{d^2} + 2/m, \tag{2.5.2}$$
where $\tau_{\mathrm{mix}}(\varepsilon) \le 8\log(m)\log(4/\varepsilon)$.

Theorem 2.5.3. Let $\bar\alpha := 1/\alpha + 1$. For any $\varepsilon > 0$ and $T > 0$,
$$L(\mathrm{Patient}(\alpha)) \le \max_{z\in[1/2,1]} \big(z + \tilde O(\sqrt{\bar\alpha/m})\big)e^{-zd/\bar\alpha} + \frac{\tau_{\mathrm{mix}}(\varepsilon)}{\bar\alpha T} + \frac{\varepsilon m\bar\alpha}{d^2} + 2\bar\alpha/m,$$
where $\tau_{\mathrm{mix}}(\varepsilon) \le 8\log(m/\bar\alpha)\log(4/\varepsilon)$.

We will prove Theorem 2.5.1 in Subsection 2.5.1, Theorem 2.5.2 in Subsection 2.5.2

and Theorem 2.5.3 in Subsection 2.5.3.
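Before the proofs, a small Monte Carlo sketch can make the Greedy/Patient gap concrete. The simulation below (an illustration with arbitrary parameters, not part of the analysis) draws the Erdős–Rényi edge indicators lazily, with probability $d/m$ at the moment they are first examined; this is valid for these two algorithms by Proposition 2.4.1.

```python
import heapq
import random

def simulate(m, d, T, policy, seed=0):
    """Estimate the loss (perished / arrived) of Greedy or Patient on an (m, d, 1) market."""
    rng = random.Random(seed)
    q = d / m  # probability that a given pair is an acceptable transaction
    events = []  # (time, kind, agent id), processed in chronological order
    t, aid = rng.expovariate(m), 0
    while t < T:  # pre-draw the Poisson arrival process of rate m
        heapq.heappush(events, (t, 'arrive', aid))
        aid += 1
        t += rng.expovariate(m)
    pool, arrived, perished = set(), 0, 0
    while events:
        t, kind, a = heapq.heappop(events)
        if kind == 'arrive':
            arrived += 1
            # Greedy matches on arrival iff the newcomer has >= 1 neighbor in the pool.
            if policy == 'greedy' and rng.random() < 1 - (1 - q) ** len(pool):
                pool.remove(rng.choice(sorted(pool)))
            else:
                pool.add(a)
                heapq.heappush(events, (t + rng.expovariate(1.0), 'critical', a))
        elif a in pool:  # criticality of an agent still waiting in the pool
            pool.remove(a)
            # Patient matches at criticality iff the critical agent has a neighbor.
            if policy == 'patient' and rng.random() < 1 - (1 - q) ** len(pool):
                pool.remove(rng.choice(sorted(pool)))
            else:
                perished += 1
    return perished / arrived

greedy_loss = simulate(m=200, d=5, T=50, policy='greedy', seed=1)
patient_loss = simulate(m=200, d=5, T=50, policy='patient', seed=1)
print(greedy_loss, patient_loss)
```

With these parameters the Greedy loss sits near the $\log(2)/d$ scale of Theorem 2.5.1, while the Patient loss is markedly smaller, consistent with the exponentially small bound of Theorem 2.5.2.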

2.5.1 Loss of the Greedy Algorithm

In this part we upper bound L(Greedy). We crucially exploit the fact that Zt is a

Markov Chain and has a unique stationary distribution, π : N → R+. Our proof

$^{33}$We use the operators $O$ and $\tilde O$ in the standard way. That is, $f(m) = O(g(m))$ iff there exist a positive real number $N$ and a real number $m_0$ such that $|f(m)| \le N|g(m)|$ for all $m \ge m_0$. $\tilde O$ is similar but ignores logarithmic factors, i.e., $f(m) = \tilde O(g(m))$ iff $f(m) = O(g(m)\log^k g(m))$ for some $k$.


proceeds in three steps: First, we show that L(Greedy) is bounded by a function of

the expected pool size. Second, we show that the stationary distribution is highly

concentrated around some point k∗, which we characterize. Third, we show that k∗

is close to the expected pool size.

Let $\zeta := \mathbb{E}_{Z\sim\pi}[Z]$ be the expected size of the pool under the stationary distribution of the Markov chain on $Z_t$. First, observe that once the Markov chain on $Z_t$ has mixed, agents perish at rate $\zeta$, as the pool is almost always an empty graph under the Greedy algorithm. Roughly speaking, if we run the Greedy algorithm for a sufficiently long time, then the Markov chain on the pool size mixes and we get $L(\mathrm{Greedy}) \simeq \zeta/m$. This observation is made rigorous in the following lemma. Note

that as T and m grow, the first three terms become negligible.

Lemma 2.5.4. For any $\varepsilon > 0$ and $T > 0$,
$$L(\mathrm{Greedy}) \le \frac{\tau_{\mathrm{mix}}(\varepsilon)}{T} + 6\varepsilon + \frac{2^{-6m}}{m} + \frac{\mathbb{E}_{Z\sim\pi}[Z]}{m}.$$

The lemma is proved in Appendix A.3.1.

The proof of the above lemma involves a lot of algebra, but the intuition is as follows: the $\frac{\mathbb{E}_{Z\sim\pi}[Z]}{m}$ term is the loss under the stationary distribution. This is equal to $L(\mathrm{Greedy})$ up to two approximations: first, it takes some time for the chain to converge to the stationary distribution; second, even after the chain has mixed, its distribution is not exactly equal to the stationary distribution. The $\frac{\tau_{\mathrm{mix}}(\varepsilon)}{T}$ term upper-bounds the loss associated with the first approximation, and the term $6\varepsilon + \frac{2^{-6m}}{m}$ upper-bounds the loss associated with the second approximation.

Given Lemma 2.5.4, in the rest of the proof we just need an upper bound on $\mathbb{E}_{Z\sim\pi}[Z]$. Unfortunately, we do not have a closed-form expression for the stationary distribution $\pi(\cdot)$. Instead, we use the balance equations of the Markov chain on $Z_t$ to characterize $\pi(\cdot)$ and upper-bound $\mathbb{E}_{Z\sim\pi}[Z]$.

Let us rigorously define the transition rates of the Markov chain on $Z_t$. For any pool size $k$, the Markov chain transits only to the states $k+1$ or


Figure 2.3: An illustration of the transition paths of the $Z_t$ Markov chain (states $k-1$, $k$, $k+1$) under the Greedy algorithm

$k-1$. It transits to state $k+1$ if a new agent arrives and the market-maker cannot match her (i.e., the new agent does not have any edges to the agents currently in the pool), and it transits to state $k-1$ if a new agent arrives and is matched, or if an agent currently in the pool becomes critical. Thus, the transition rates $r_{k\to k+1}$ and $r_{k\to k-1}$ are defined as follows:
$$r_{k\to k+1} := m\Big(1-\frac{d}{m}\Big)^{k}, \tag{2.5.3}$$
$$r_{k\to k-1} := k + m\Big(1-\Big(1-\frac{d}{m}\Big)^{k}\Big). \tag{2.5.4}$$

In the above equations we used the facts that agents arrive at rate $m$, that they perish at rate 1, and that the probability of an acceptable transaction between two agents is $d/m$. Let us write down the balance equation for this Markov chain (see equation (2.4.3) for the general form). Considering the cut separating the states $0, 1, 2, \ldots, k-1$ from the rest (see Figure 2.3 for an illustration), it follows that
$$\pi(k-1)\, r_{k-1\to k} = \pi(k)\, r_{k\to k-1}. \tag{2.5.5}$$
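Because (2.5.5) determines $\pi$ up to normalization, the stationary distribution of the Greedy chain can be tabulated directly and the concentration claimed below checked numerically. A sketch with arbitrary parameters $m = 1000$, $d = 10$ (an illustration, not part of the proof):

```python
import math

def greedy_stationary(m, d, kmax):
    """pi(k) up to truncation kmax, via the detailed-balance recursion (2.5.5)."""
    up = lambda k: m * (1 - d / m) ** k               # r_{k -> k+1}, eq. (2.5.3)
    down = lambda k: k + m * (1 - (1 - d / m) ** k)   # r_{k -> k-1}, eq. (2.5.4)
    w = [1.0]
    for k in range(1, kmax + 1):
        w.append(w[-1] * up(k - 1) / down(k))         # pi(k) = pi(k-1) up(k-1)/down(k)
    total = sum(w)
    return [x / total for x in w]

m, d = 1000, 10
pi = greedy_stationary(m, d, kmax=3 * m)
mean = sum(k * p for k, p in enumerate(pi))
mode = max(range(len(pi)), key=pi.__getitem__)
print(mean, mode, m / (2 * d + 1), math.log(2) * m / d)
```

The mass piles up around a single point lying between $m/(2d+1)$ and $\log(2)m/d$, exactly as Proposition 2.5.5 asserts.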

Now we are ready to characterize the stationary distribution $\pi(\cdot)$. In the following proposition we show that there is a number $k^* \le \log(2)m/d$ such that under the stationary distribution, the size of the pool is highly concentrated in an interval of length $O(\sqrt{m/d})$ around $k^*$.$^{34}$

Proposition 2.5.5. There exists $m/(2d+1) \le k^* < \log(2)m/d$ such that for any

34In this paper, log x refers to the natural log of x.


$\sigma > 1$,
$$\mathbb{P}_\pi\Big[k^* - \sigma\sqrt{2m/d} \le Z \le k^* + \sigma\sqrt{2m/d}\Big] \ge 1 - O(\sqrt{m/d})\,e^{-\sigma^2}.$$

Proof. Let us define $f : \mathbb{R}\to\mathbb{R}$ as an interpolation of the difference of the transition rates over the reals,
$$f(x) := m(1-d/m)^x - \big(x + m(1-(1-d/m)^x)\big).$$
In particular, observe that $f(k) = r_{k\to k+1} - r_{k\to k-1}$. This function is decreasing and convex over the non-negative reals. We define $k^*$ as its unique root. Let $k^*_{\min} := m/(2d+1)$ and $k^*_{\max} := \log(2)m/d$. We show that $f(k^*_{\min}) \ge 0$ and $f(k^*_{\max}) \le 0$, which implies $k^*_{\min} \le k^* < k^*_{\max}$:
$$f(k^*_{\min}) = 2m(1-d/m)^{k^*_{\min}} - k^*_{\min} - m \ge 2m\Big(1-\frac{k^*_{\min}d}{m}\Big) - k^*_{\min} - m = 0,$$
$$f(k^*_{\max}) = 2m(1-d/m)^{k^*_{\max}} - k^*_{\max} - m \le 2me^{-k^*_{\max}d/m} - k^*_{\max} - m = -k^*_{\max} \le 0.$$
In the first inequality we used equation (A.1.4) from Section A.1.

It remains to show that $\pi$ is highly concentrated around $k^*$. We prove this in several steps.

Lemma 2.5.6. For any integer $k \ge k^*$,
$$\frac{\pi(k+1)}{\pi(k)} \le e^{-(k-k^*)d/m},$$
and for any $k \le k^*$, $\pi(k-1)/\pi(k) \le e^{-(k^*-k+1)d/m}$.

Proof. For $k \ge k^*$, by (2.5.3), (2.5.4), and (2.5.5),
$$\frac{\pi(k)}{\pi(k+1)} = \frac{(k+1) + m(1-(1-d/m)^{k+1})}{m(1-d/m)^k} = \frac{k - k^* + 1 - m(1-d/m)^{k+1} + 2m(1-d/m)^{k^*}}{m(1-d/m)^k},$$


where we used the definition of $k^*$. Therefore,
$$\frac{\pi(k)}{\pi(k+1)} \ge -(1-d/m) + \frac{2}{(1-d/m)^{k-k^*}} \ge \frac{1}{(1-d/m)^{k-k^*}} \ge e^{(k-k^*)d/m},$$
where the last inequality uses $1-x \le e^{-x}$. Multiplying across the inequality yields the claim. We can prove the second conclusion similarly. For $k \le k^*$,
$$\frac{\pi(k-1)}{\pi(k)} = \frac{k - k^* - m(1-d/m)^k + 2m(1-d/m)^{k^*}}{m(1-d/m)^{k-1}} \le -(1-d/m) + 2(1-d/m)^{k^*-k+1} \le (1-d/m)^{k^*-k+1} \le e^{-(k^*-k+1)d/m},$$
where the second-to-last inequality uses $k \le k^*$.

By repeated application of the above lemma, for any integer $k \ge k^*$ we get$^{35}$
$$\pi(k) \le \frac{\pi(k)}{\pi(\lceil k^*\rceil)} \le \exp\Big(-\frac{d}{m}\sum_{i=\lceil k^*\rceil}^{k-1}(i-k^*)\Big) \le \exp\big(-d(k-k^*-1)^2/2m\big). \tag{2.5.6}$$

We are almost done. For any $\sigma > 0$,
$$\sum_{k=k^*+1+\sigma\sqrt{2m/d}}^{\infty} \pi(k) \le \sum_{k=k^*+1+\sigma\sqrt{2m/d}}^{\infty} e^{-d(k-k^*-1)^2/2m} = \sum_{k=0}^{\infty} e^{-d(k+\sigma\sqrt{2m/d})^2/2m} \le \frac{e^{-\sigma^2}}{\min\{1/2,\ \sigma\sqrt{d/2m}\}}.$$
The last inequality uses equation (A.1.1) from Appendix A.1. We can similarly upper-bound $\sum_{k=0}^{k^*-\sigma\sqrt{2m/d}} \pi(k)$.

Proposition 2.5.5 shows that the probability that the size of the pool falls outside an interval of length $O(\sqrt{m/d})$ around $k^*$ drops exponentially fast as the market size grows. We also remark that the upper bound on $k^*$ becomes tight as $d$ goes to infinity.

$^{35}$$\lceil k^*\rceil$ indicates the smallest integer larger than $k^*$.


The following lemma exploits Proposition 2.5.5 to show that the expected pool size under the stationary distribution is close to $k^*$.

Lemma 2.5.7. For $k^*$ as in Proposition 2.5.5,
$$\mathbb{E}_{Z\sim\pi}[Z] \le k^* + O\big(\sqrt{m/d}\,\log(m/d)\big).$$
This lemma is proved in Appendix A.3.2.

Now Theorem 2.5.1 follows immediately from Lemma 2.5.4 and Lemma 2.5.7, because
$$\frac{\mathbb{E}_{Z\sim\pi}[Z]}{m} \le \frac{1}{m}\Big(k^* + O\big(\sqrt{m/d}\,\log(m/d)\big)\Big) \le \frac{\log(2)}{d} + o(1).$$

2.5.2 Loss of the Patient Algorithm

Throughout this section we use $Z_t$ to denote the size of the pool under Patient. Let $\pi : \mathbb{N}\to\mathbb{R}^+$ be the unique stationary distribution of the Markov chain on $Z_t$, and let $\zeta := \mathbb{E}_{Z\sim\pi}[Z]$ be the expected size of the pool under that distribution.

Once more, our proof strategy proceeds in three steps. First, we show that $L(\mathrm{Patient})$ is bounded by a function of $\mathbb{E}_{Z\sim\pi}\big[Z(1-d/m)^{Z-1}\big]$. Second, we show that the stationary distribution of $Z_t$ is highly concentrated around some point $k^*$. Third, we use this concentration result to produce an upper bound on $\mathbb{E}_{Z\sim\pi}\big[Z(1-d/m)^{Z-1}\big]$.

By Proposition 2.4.1, at any point in time $G_t$ is an Erdős–Rényi random graph. Thus, once an agent becomes critical, he has no acceptable transactions with probability $(1-d/m)^{Z_t-1}$. Since each agent becomes critical at rate 1, if we run Patient for a sufficiently long time, then $L(\mathrm{Patient}) \approx \frac{\zeta}{m}(1-d/m)^{\zeta-1}$. The following lemma makes the above discussion rigorous.

Lemma 2.5.8. For any $\varepsilon > 0$ and $T > 0$,
$$L(\mathrm{Patient}) \le \frac{1}{m}\,\mathbb{E}_{Z\sim\pi}\big[Z(1-d/m)^{Z-1}\big] + \frac{\tau_{\mathrm{mix}}(\varepsilon)}{T} + \frac{\varepsilon m}{d^2}.$$

Proof. See Appendix A.3.3.


Figure 2.4: An illustration of the transition paths of the $Z_t$ Markov chain (states $k$, $k+1$, $k+2$) under the Patient algorithm

So in the rest of the proof we just need to upper-bound $\mathbb{E}_{Z\sim\pi}\big[Z(1-d/m)^{Z-1}\big]$. As in the Greedy case, we do not have a closed-form expression for the stationary distribution $\pi(\cdot)$. Instead, we use the balance equations of the Markov chain on $Z_t$ to show that $\pi$ is highly concentrated around a number $k^* \in [m/2, m]$.

Let us start by defining the transition rates of the Markov chain on $Z_t$. For any pool size $k$, the Markov chain transits only to states $k+1$, $k-1$, or $k-2$. It transits to state $k+1$ if a new agent arrives, to state $k-1$ if an agent becomes critical and the planner cannot match him, and to state $k-2$ if an agent becomes critical and the planner matches him.

Recall that agents arrive at rate $m$, that they become critical at rate 1, and that the probability of an acceptable transaction between two agents is $d/m$.

Thus, the transition rates $r_{k\to k+1}$, $r_{k\to k-1}$, and $r_{k\to k-2}$ are defined as follows:
$$r_{k\to k+1} := m, \tag{2.5.7}$$
$$r_{k\to k-1} := k\Big(1-\frac{d}{m}\Big)^{k-1}, \tag{2.5.8}$$
$$r_{k\to k-2} := k\Big(1-\Big(1-\frac{d}{m}\Big)^{k-1}\Big). \tag{2.5.9}$$

Let us write down the balance equation for this Markov chain (see equation (2.4.3) for the general form). Considering the cut separating the states $0, 1, 2, \ldots, k$ from the rest (see Figure 2.4 for an illustration), it follows that
$$\pi(k)\, r_{k\to k+1} = \pi(k+1)\, r_{k+1\to k} + \pi(k+1)\, r_{k+1\to k-1} + \pi(k+2)\, r_{k+2\to k}. \tag{2.5.10}$$
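Unlike the Greedy chain, this chain can jump down by two, so it is not a birth–death chain and $\pi$ cannot be obtained from a simple two-term recursion; it can, however, be computed from the full balance conditions $\pi Q = 0$ on a truncated state space. A numerical sketch (arbitrary parameters $m = 200$, $d = 4$; an illustration, not part of the proof):

```python
import numpy as np

def patient_stationary(m, d, N):
    """Solve pi Q = 0, sum(pi) = 1 for the truncated chain with rates (2.5.7)-(2.5.9)."""
    q = 1 - d / m
    Q = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        if k < N:
            Q[k, k + 1] = m                       # arrival
        if k >= 1:
            Q[k, k - 1] = k * q ** (k - 1)        # critical agent, unmatched
        if k >= 2:
            Q[k, k - 2] = k * (1 - q ** (k - 1))  # critical agent, matched
        Q[k, k] = -Q[k].sum()
    A = np.vstack([Q.T, np.ones(N + 1)])
    b = np.zeros(N + 2); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

m, d = 200, 4
pi = patient_stationary(m, d, N=3 * m)
ks = np.arange(len(pi))
mean = float(pi @ ks)
# stationary loss estimate: E[Z (1 - d/m)^{Z-1}] / m, cf. Lemma 2.5.8
loss = float(pi @ (ks * (1 - d / m) ** np.clip(ks - 1, 0, None))) / m
print(mean, loss)
```

The pool size concentrates in the band $[m/2-2,\ m-1]$ of Proposition 2.5.9, and the resulting stationary loss is already exponentially small in $d$.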


Now we can characterize $\pi(\cdot)$. We show that under the stationary distribution, the size of the pool is highly concentrated around a number $k^* \in [m/2-2, m-1]$. Recall that under the Greedy algorithm the concentration was around $k^* \in [\frac{m}{2d+1}, \frac{\log(2)m}{d}]$, whereas here it is at least $m/2 - 2$.

Proposition 2.5.9 (Patient Concentration). There exists a number $m/2-2 \le k^* \le m-1$ such that for any $\sigma \ge 1$,
$$\mathbb{P}_\pi\big[k^* - \sigma\sqrt{4m} \le Z\big] \ge 1 - 2\sqrt{m}\,e^{-\sigma^2}, \qquad \mathbb{P}_\pi\big[Z \le k^* + \sigma\sqrt{4m}\big] \ge 1 - 8\sqrt{m}\,e^{-\frac{\sigma^2\sqrt{m}}{2\sigma+\sqrt{m}}}.$$

Proof Overview. The proof idea is similar to that of Proposition 2.5.5. First, let us rewrite (2.5.10) by substituting the transition rates from (2.5.7), (2.5.8), and (2.5.9):
$$m\pi(k) = (k+1)\pi(k+1) + (k+2)\Big(1-\Big(1-\frac{d}{m}\Big)^{k+1}\Big)\pi(k+2). \tag{2.5.11}$$
Let us define a continuous $f : \mathbb{R}\to\mathbb{R}$ as follows:
$$f(x) := m - (x+1) - (x+2)\big(1-(1-d/m)^{x+1}\big). \tag{2.5.12}$$
It follows that
$$f(m-1) \le 0, \qquad f(m/2-2) > 0,$$
so $f(\cdot)$ has a root $k^*$ with $m/2-2 < k^* < m$. In the rest of the proof we show that the states far from $k^*$ have very small probability under the stationary distribution, which completes the proof of Proposition 2.5.9. This part of the proof involves a lot of algebra and is essentially very similar to the proof of Proposition 2.5.5. We refer the interested reader to Subsection A.3.4 for the complete proof of this last step.

Since the stationary distribution of $Z_t$ is highly concentrated around $k^* \in [m/2-2, m-1]$ by the above proposition, we get the following upper bound on $\mathbb{E}_{Z\sim\pi}\big[Z(1-d/m)^{Z}\big]$, which is proved in Appendix A.3.5.


Lemma 2.5.10. For any $d \ge 0$ and sufficiently large $m$,
$$\mathbb{E}_{Z\sim\pi}\big[Z(1-d/m)^{Z}\big] \le \max_{z\in[m/2,m]}\big(z + \tilde O(\sqrt{m})\big)(1-d/m)^z + 2.$$
Now Theorem 2.5.2 follows immediately by combining Lemma 2.5.8 and Lemma 2.5.10.

2.5.3 Loss of the Patient(α) Algorithm

Our idea is to slow down the process and use Theorem 2.5.2 to analyze the Patient($\alpha$) algorithm. More precisely, instead of analyzing the Patient($\alpha$) algorithm on an $(m, d, 1)$ market, we analyze the Patient algorithm on an $(m/\bar\alpha, d/\bar\alpha, 1)$ market. First we need a lemma on the equivalence of markets with different criticality rates.

Definition 2.5.11 (Market Equivalence). An $\alpha$-scaling of a dynamic matching market $(m, d, \lambda)$ is defined as follows. Given any realization of this market, i.e., given $(A^c_t, A^n_t, E)$ for $0 \le t \le \infty$, we construct another realization $(A^{c\,\prime}_t, A^{n\,\prime}_t, E')$ with $(A^{c\,\prime}_t, A^{n\,\prime}_t) = (A^c_{\alpha\cdot t}, A^n_{\alpha\cdot t})$ and the same set of acceptable transactions. We say two dynamic matching markets $(m, d, \lambda)$ and $(m', d', \lambda')$ are equivalent if one is an $\alpha$-scaling of the other.

It turns out that for any $\alpha \ge 0$ and any time $t$, under any of the Greedy, Patient, or Patient($\alpha$) algorithms (and, in general, under any time-scale-independent online algorithm), the set of matched agents at time $t$ of a realization of an $(m, d, \lambda)$ matching market is the same as the set of matched agents at time $\alpha t$ of an $\alpha$-scaling of that realization. The following proposition makes this rigorous.

Proposition 2.5.12. For any $m, d, \lambda$, the $(m/\lambda, d/\lambda, 1)$ matching market is equivalent to the $(m, d, \lambda)$ matching market.$^{36}$

Now Theorem 2.5.3 follows simply by combining the above proposition with Theorem 2.5.2. First, by the additivity of the Poisson process, the loss of the Patient($\alpha$) algorithm in an $(m, d, 1)$ matching market is equal to the loss of the Patient algorithm in an $(m, d, \bar\alpha)$ matching market, where $\bar\alpha = 1/\alpha + 1$. Second, by the above proposition, the loss of the Patient algorithm on an $(m, d, \bar\alpha)$ matching market at time $T$ is the same as the loss of this algorithm on an $(m/\bar\alpha, d/\bar\alpha, 1)$ market at time $\bar\alpha T$. The latter is upper-bounded in Theorem 2.5.2.

$^{36}$The proof is by inspection.


in a (m, d, α) matching market, where α = 1/α + 1. Second, by the above proposi-

tion, the loss of the Patient algorithm on a (m, d, α) matching market at time T is

the same as the loss of this algorithm on a (m/α, d/α, 1) market at time αT . The

latter is upper-bounded in Theorem 2.5.2.

2.6 Welfare and Optimal Waiting Time under Discounting

Theorem 2.6.1. There is a number $m/2-2 \le k^* \le m$ (as defined in Proposition 2.5.9) such that for any $T \ge 0$, $\delta \ge 0$ and $\varepsilon < 1/(2m^2)$, writing $q := 1-d/m$ and $T_0 := 16\log(m)\log(4/\varepsilon)$,
$$W(\mathrm{Patient}) \ge \frac{T-T_0}{T}\left(\frac{1-q^{k^*+\tilde O(\sqrt{m})}}{1+\delta/2-\frac{1}{2}q^{k^*-\tilde O(\sqrt{m})}} - \tilde O(m^{-3/2})\right),$$
$$W(\mathrm{Patient}) \le \frac{2T_0}{T} + \frac{T-T_0}{T}\left(\frac{1-q^{k^*-\tilde O(\sqrt{m})}}{1+\delta/2-\frac{1}{2}q^{k^*+\tilde O(\sqrt{m})}} + \tilde O(m^{-3/2})\right).$$
As a corollary, for any $\alpha \ge 0$ and $\bar\alpha = 1/\alpha + 1$,
$$W(\mathrm{Patient}(\alpha)) \ge \frac{T-T_0}{T}\left(\frac{1-q^{k^*/\bar\alpha+\tilde O(\sqrt{m})}}{1+\delta/(2\bar\alpha)-\frac{1}{2}q^{k^*/\bar\alpha-\tilde O(\sqrt{m})}} - \tilde O(m^{-3/2})\right),$$
$$W(\mathrm{Patient}(\alpha)) \le \frac{2T_0}{T} + \frac{T-T_0}{T}\left(\frac{1-q^{k^*/\bar\alpha-\tilde O(\sqrt{m})}}{1+\delta/(2\bar\alpha)-\frac{1}{2}q^{k^*/\bar\alpha+\tilde O(\sqrt{m})}} + \tilde O(m^{-3/2})\right).$$

Theorem 2.6.2. If $m > 10d$, then for any $T \ge 0$,
$$W(\mathrm{Greedy}) \le 1 - \frac{1}{2d+1+d^2/m}.$$

Say an agent $a$ arrives at time $t_a(a)$. We let $X_t$ be the sum of the potential utilities of the agents in $A_t$:
$$X_t = \sum_{a\in A_t} e^{-\delta(t-t_a(a))};$$
i.e., if we matched all of the agents currently in the pool immediately, the total utility that they would receive is exactly $X_t$.

For $t_0, \varepsilon > 0$, let $W_{t_0,t_0+\varepsilon}$ be the expected total utility of the agents who are matched in the interval $[t_0, t_0+\varepsilon]$. By the definition of the social welfare of an online algorithm, we have
$$W(\mathrm{Patient}) = \mathbb{E}\Big[\frac{1}{T}\int_{t=0}^{T} W_{t,t+dt}\,dt\Big] = \frac{1}{T}\int_{t=0}^{T}\mathbb{E}[W_{t,t+dt}]\,dt.$$

2.6.1 Welfare of the Patient Algorithm

In this section, we prove Theorem 2.6.1. At each moment, all agents in the pool are equally likely to become critical, and, from the perspective of the planner, all agents are equally likely to be the neighbor of a critical agent. Hence, the expected utility of each of the agents who are matched at time $t$ under the Patient algorithm is $X_t/Z_t$. Thus,
$$W(\mathrm{Patient}) = \frac{1}{mT}\int_{t=0}^{T}\mathbb{E}\Big[\frac{2X_t}{Z_t}\,Z_t\big(1-(1-d/m)^{Z_t}\big)\Big]\,dt = \frac{2}{mT}\int_{t=0}^{T}\mathbb{E}\big[X_t\big(1-(1-d/m)^{Z_t}\big)\big]\,dt. \tag{2.6.1}$$

First, we prove the following lemma.

Lemma 2.6.3. For any $\varepsilon < 1/(2m^2)$ and $t \ge \tau_{\mathrm{mix}}(\varepsilon)$,
$$\mathbb{E}[X_t]\big(1-q^{k^*+\tilde O(\sqrt{m})}\big) - \tilde O(m^{-1/2}) \le \mathbb{E}\big[X_t(1-q^{Z_t})\big] \le \mathbb{E}[X_t]\big(1-q^{k^*-\tilde O(\sqrt{m})}\big) + \tilde O(m^{-1/2}).$$

The proof of the above lemma follows from the concentration inequalities proved in Proposition 2.5.9; see Section A.4 for the details.

It remains to estimate $\mathbb{E}[X_t]$. This is done in the following lemma.

Lemma 2.6.4. For any $\varepsilon < 1/(2m^2)$ and $t_1 \ge 16\log(m)\log(4/\varepsilon) \ge 2\tau_{\mathrm{mix}}(\varepsilon)$,
$$\frac{m-\tilde O(m^{-1/2})}{2+\delta-q^{k^*-\tilde O(\sqrt{m})}} \le \mathbb{E}[X_{t_1}] \le \frac{m+\tilde O(m^{-1/2})}{2+\delta-q^{k^*+\tilde O(\sqrt{m})}}.$$


Proof. Let $\eta > 0$ be very close to zero (eventually we let $\eta \to 0$). Since we have an $(m, d, 1)$ matching market, using equation (2.4.1), for any $t \ge 0$ we have
$$\mathbb{E}[X_{t+\eta}\mid X_t, Z_t] = X_t e^{-\eta\delta} + m\eta - \eta Z_t\Big(\frac{X_t}{Z_t}q^{Z_t}\Big) - 2\eta Z_t\Big(\frac{X_t}{Z_t}(1-q^{Z_t})\Big) \pm O(\eta^2).$$
The first term on the RHS accounts for the exponential discounting of the utility of the agents in the pool. The second term accounts for new arrivals, the third term for perished agents, and the last term for matched agents. We use the notation $A = B \pm C$ to denote $B - C \le A \le B + C$.

Using $e^{-x} = 1 - x + O(x^2)$ and rearranging the equation, we get
$$\mathbb{E}[X_{t+\eta}\mid X_t, Z_t] = m\eta + X_t - \eta(1+\delta)X_t - \eta X_t(1-q^{Z_t}) \pm O(\eta^2).$$

Using Lemma 2.6.3, for any $t \ge \tau_{\mathrm{mix}}(\varepsilon)$ we can estimate $\mathbb{E}\big[X_t(1-q^{Z_t})\big]$. Taking expectations on both sides of the above equation, we get
$$\frac{\mathbb{E}[X_{t+\eta}] - \mathbb{E}[X_t]}{\eta} = m - \mathbb{E}[X_t]\big(2+\delta-q^{k^*\pm\tilde O(\sqrt{m})}\big) \pm \tilde O(m^{-1/2}) - O(\eta).$$
Letting $\eta \to 0$ and solving the resulting differential equation from $\tau_{\mathrm{mix}}(\varepsilon)$ to $t_1$, we get
$$\mathbb{E}[X_{t_1}] = \frac{m\pm\tilde O(m^{-1/2})}{2+\delta-q^{k^*\pm\tilde O(\sqrt{m})}} + C_1\exp\Big(-\big(\delta+2-q^{k^*\pm\tilde O(\sqrt{m})}\big)\big(t_1-\tau_{\mathrm{mix}}(\varepsilon)\big)\Big).$$
For $t_1 = \tau_{\mathrm{mix}}(\varepsilon)$ we use the initial condition $\mathbb{E}\big[X_{\tau_{\mathrm{mix}}(\varepsilon)}\big] \le \mathbb{E}\big[Z_{\tau_{\mathrm{mix}}(\varepsilon)}\big] \le m$, so we can take $C_1 \le m$. Finally, since $t_1 \ge 2\tau_{\mathrm{mix}}(\varepsilon)$ and $t_1/2 \ge 2\log(m)$, we can upper-bound the latter term by $\tilde O(m^{-1/2})$.
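The differential equation solved above is linear, $\frac{d}{dt}\mathbb{E}[X_t] = m - c\,\mathbb{E}[X_t]$ with $c = 2+\delta-q^{k^*}$ (up to the error terms), so its fixed point $m/c$ can be confirmed by direct integration. The numbers below ($m=200$, $d=4$, $\delta=0.1$, and a stand-in $k^* = 105$) are arbitrary illustrations, not values from the proof.

```python
m, d, delta = 200.0, 4.0, 0.1
q = 1 - d / m
kstar = 105          # stand-in for the root k* of Proposition 2.5.9 (illustrative)
c = 2 + delta - q ** kstar

# Forward-Euler integration of dX/dt = m - c X, starting from X(0) = 0.
X, dt = 0.0, 1e-3
for _ in range(int(40 / dt)):
    X += dt * (m - c * X)
print(X, m / c)  # the trajectory settles at the fixed point m / c
```

The trajectory converges to $m/c$, matching the leading term of Lemma 2.6.4.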

Let $T_0 = 16\log(m)\log(4/\varepsilon)$. Since (for any matching algorithm) the sum of the utilities of the agents that leave the market before time $T_0$ is at most $mT_0$ in expectation, by the above two lemmas we can write
$$W(\mathrm{Patient}) \le \frac{2}{mT}\Big(mT_0 + \int_{T_0}^{T}\mathbb{E}\big[X_t(1-q^{Z_t})\big]\,dt\Big) \le \frac{2T_0}{T} + \frac{2}{mT}\int_{T_0}^{T}\left(\frac{m\big(1-q^{k^*-\tilde O(\sqrt{m})}\big)}{\delta+2-q^{k^*+\tilde O(\sqrt{m})}} + \tilde O(m^{-1/2})\right)dt$$
$$\le \frac{2T_0}{T} + \frac{T-T_0}{T}\left(\frac{1-q^{k^*-\tilde O(\sqrt{m})}}{1+\delta/2-\frac{1}{2}q^{k^*+\tilde O(\sqrt{m})}} + \tilde O(m^{-3/2})\right).$$

Similarly, since the sum of the utilities of the agents that leave the market by time $T_0$ is non-negative, we can show that
$$W(\mathrm{Patient}) \ge \frac{T-T_0}{T}\left(\frac{1-q^{k^*+\tilde O(\sqrt{m})}}{1+\delta/2-\frac{1}{2}q^{k^*-\tilde O(\sqrt{m})}} - \tilde O(m^{-3/2})\right).$$

2.6.2 Welfare of the Greedy Algorithm

Here we upper-bound the welfare of the optimum online algorithm OPT, which immediately upper-bounds the welfare of the Greedy algorithm. Recall that by Theorem 2.3.1, for any $T > 0$, a $1/(2d+1+d^2/m)$ fraction of the agents perish under OPT. On the other hand, by the definition of utility, we receive a utility of at most 1 from any matched agent. Therefore, even if all of the matched agents receive a utility of 1, for any $\delta \ge 0$,
$$W(\mathrm{Greedy}) \le W(\mathrm{OPT}) \le 1 - \frac{1}{2d+1+d^2/m}.$$

2.7 Incentive-Compatible Mechanisms

In this section we design a dynamic mechanism to elicit the departure times of agents.

As alluded to in Subsection 2.2.3, we assume that agents only have statistical knowl-

edge about the rest of the market: That is, each agent knows the market parameters

(m, d, 1), her own status (present, critical, perished), and the details of the dynamic

mechanism that the market-maker is executing. Agents do not observe the graph Gt.


Each agent $a$ chooses a mixed strategy; that is, she reports becoming critical in an infinitesimal time interval $[t, t+dt]$ with rate $c_a(t)\,dt$. In other words, each agent $a$ has a clock that ticks at rate $c_a(t)$ at time $t$, and she reports criticality when the clock ticks. We assume each agent's strategy function $c_a(\cdot)$ is well-behaved, i.e., non-negative, continuously differentiable, and continuously integrable. Note that since the agent observes only the parameters of the market, $c_a(\cdot)$ can depend on any parameter of our model, but the function is the same across different sample paths of the stochastic process.

A strategy profile $C$ is a vector of well-behaved functions, one for each agent in the market; that is, $C = [c_a]_{a\in A}$. For an agent $a$ and a strategy profile $C$, let $\mathbb{E}[u_C(a)]$ be the expected utility of $a$ under the strategy profile $C$. Note that $0 \le \mathbb{E}[u_C(a)] \le 1$ for any $C$ and $a$. Given a strategy profile $C = [c_a]_{a\in A}$, let $C - c_a + \hat{c}_a$ denote the strategy profile that is the same as $C$ except that agent $a$ plays $\hat{c}_a$ rather than $c_a$. The following definition introduces our solution concept.

Definition 2.7.1. A strategy profile $C$ is a strong $\varepsilon$-Nash equilibrium if for any agent $a$ and any well-behaved function $\hat{c}_a(\cdot)$,
$$1 - \mathbb{E}[u_C(a)] \le (1+\varepsilon)\big(1 - \mathbb{E}[u_{C-c_a+\hat{c}_a}(a)]\big).$$

Note that the solution concept we are introducing here is different from the usual definition of an $\varepsilon$-Nash equilibrium, where the condition is either $\mathbb{E}[u_C(a)] \ge \mathbb{E}[u_{C-c_a+\hat{c}_a}(a)] - \varepsilon$ or $\mathbb{E}[u_C(a)] \ge (1-\varepsilon)\,\mathbb{E}[u_{C-c_a+\hat{c}_a}(a)]$. The reason we use $1 - \mathbb{E}[u_C(a)]$ as a measure of distance is that under the Patient($\alpha$) algorithm $\mathbb{E}[u_C(a)]$ is very close to 1, so $1 - \mathbb{E}[u_C(a)]$ is a lower-order term. Thus, this definition restricts us to a stronger equilibrium concept, which requires us to show that in equilibrium agents can neither increase their utilities nor improve the lower-order terms associated with their utilities by a factor of more than $\varepsilon$.

Throughout this section, let $k^* \in [m/2-2, m-1]$ be the root of (A.3.7) as defined in Proposition 2.5.9, and let $\beta := (1-d/m)^{k^*}$. In this section we show that if $\delta$ (the discount rate) is no more than $\beta$, then the strategy profile $c_a(t) = 0$ for all agents $a$ and all $t$ is a strong $\varepsilon$-Nash equilibrium for $\varepsilon$ very close to zero. In other


words, if all other agents are truthful, an agent’s utility from being truthful is almost as large as her utility from any other strategy.

Theorem 2.7.2. If the market is at stationarity and δ ≤ β, then ca(t) = 0 for all a, t is a strong O(d^4 log^3(m)/√m)-Nash equilibrium for Patient-Mechanism(∞).

By our market equivalence result (Proposition 2.5.12), Theorem 2.7.2 leads to the

following corollary.

Corollary 2.7.3. Let ᾱ = 1/(α + 1) and β(ᾱ) = ᾱ(1 − d/m)^{m/ᾱ}. If the market is at stationarity and δ ≤ β(ᾱ), then ca(t) = 0 for all a, t is a strong O((d/ᾱ)^4 log^3(m/ᾱ)/√(m/ᾱ))-Nash equilibrium for Patient-Mechanism(α).

The proof of the above theorem is involved, but the basic idea is simple. If an agent reports becoming critical at the time of arrival, she will receive a utility of 1 − β. On the other hand, if she is truthful (assuming δ = 0), she will receive about 1 − β/2. In the course of the proof we show that under any strategy vector c(·) the expected utility of an agent interpolates between these two numbers, so it is maximized when she is truthful.

The precise proof of the theorem is based on Lemma 2.7.4. In this lemma, we upper-bound the utility of an agent for any arbitrary strategy, given that all other agents are truthful.

Lemma 2.7.4. Let Z0 be in the stationary distribution. Suppose a enters the market at time 0. If δ < β and 10d^4 log^3(m) ≤ √m, then for any well-behaved function c(·),

E[uc(a)] ≤ 2(1 − β)/(2 − β + δ) + O(d^4 log^3(m)/√m)·β.

Proof. We present a sketch of the proof here. The full proof can be found in Section A.5.

For an agent a who enters the market at time t0, let P[a ∈ At+t0] be the probability that agent a is in the pool at time t + t0. Observe that an agent gets matched in one of the following two ways. First, a becomes critical in the interval [t, t + ε] with probability ε · P[a ∈ At](1 + c(t)), and if she is critical she is matched with probability


E[1 − (1 − d/m)^{Zt−1} | a ∈ At]. Second, a may also get matched (without being critical) in the interval [t, t + ε]. Observe that if an agent b ∈ At with b ≠ a becomes critical, she will be matched with a with probability (1 − (1 − d/m)^{Zt−1})/(Zt − 1). Therefore, the probability that a is matched in [t, t + ε] without being critical is

P[a ∈ At] · E[ε · (Zt − 1) · (1 − (1 − d/m)^{Zt−1})/(Zt − 1) | a ∈ At] = ε · P[a ∈ At] · E[1 − (1 − d/m)^{Zt−1} | a ∈ At],

and the probability of getting matched in [t, t + ε] is

ε(2 + c(t)) · E[1 − (1 − d/m)^{Zt−1} | a ∈ At] · P[a ∈ At].

Based on this expression, for any strategy of agent a we have

E[uc(a)] ≤ β/m + ∫_{t=0}^{t∗} (2 + c(t)) · E[1 − (1 − d/m)^{Zt−1} | a ∈ At] · P[a ∈ At] · e^{−δt} dt,

where t∗ is the moment after which the expected utility that a receives in the interval [t∗, ∞) is negligible, i.e., in the best case it is at most β/m.

In order to bound the expected utility, we need to bound P[a ∈ At+t0]. We do this by writing down the dynamical equation governing the evolution of P[a ∈ At+t0] and solving the associated differential equation. In addition, we need to study E[1 − (1 − d/m)^{Zt−1} | a ∈ At] to bound the utility expression. This is not easy in general: although the distribution of Zt remains stationary, the distribution of Zt conditioned on a ∈ At can be a very different distribution. Therefore, we prove simple upper and lower bounds on E[1 − (1 − d/m)^{Zt−1} | a ∈ At] using the concentration properties of Zt. The details of all these calculations are presented in Section A.5, in which we finally obtain the following closed-form upper bound on the expected utility of a:

E[uc(a)] ≤ (2dσ^5/√m)·β + ∫_{t=0}^{∞} (1 − β)(2 + c(t)) exp(−∫_{τ=0}^{t} (2 + c(τ) − β) dτ) e^{−δt} dt.   (2.7.1)


Finally, we show that the right-hand side is maximized by letting c(t) = 0 for all t. Let Uc(a) be the right-hand side of the above equation. Let c be a function that maximizes Uc(a) that is not identically zero, and suppose c(t) ≠ 0 for some t ≥ 0. We define a function c̃ : R+ → R+ and we show that if δ < β, then Uc̃(a) > Uc(a). Let c̃ be the following function:

c̃(τ) =
  c(τ)              if τ < t,
  0                 if t ≤ τ ≤ t + ε,
  c(τ) + c(τ − ε)   if t + ε ≤ τ ≤ t + 2ε,
  c(τ)              otherwise.

In words, we push the mass of c(·) in the interval [t, t + ε] to the right. We remark that the above function c̃(·) is not necessarily continuous, so we need to smooth it out. The latter can be done without introducing any errors, and we do not describe the details here. In Section A.5, we show that Uc̃(a) − Uc(a) is non-negative as long as δ ≤ β, which means that the maximizer of Uc(a) is the all-zero function. Plugging c(t) = 0 into (2.7.1) completes the proof of Lemma 2.7.4.
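The maximization step can be sanity-checked numerically: evaluating the integral on the right-hand side of (2.7.1) (with the O(·) error term dropped) for a few candidate strategies shows c ≡ 0 dominating whenever δ < β. The values β = 0.3 and δ = 0.1 and the candidate functions below are illustrative choices of ours, not from the text:

```python
import math

def U(c, beta, delta, T=50.0, n=100000):
    """Left-Riemann evaluation of the right-hand side of (2.7.1) without the
    O(.) error term:
        U_c = int_0^inf (1-b)(2+c(t)) exp(-int_0^t (2+c(s)-b) ds) e^(-delta t) dt.
    The integrand decays exponentially, so truncating at T is harmless."""
    dt = T / n
    total, inner = 0.0, 0.0   # inner accumulates int_0^t (2 + c(s) - beta) ds
    for i in range(n):
        t = i * dt
        total += (1 - beta) * (2 + c(t)) * math.exp(-inner - delta * t) * dt
        inner += (2 + c(t) - beta) * dt
    return total

beta, delta = 0.3, 0.1  # illustrative values with delta < beta
u_truthful = U(lambda t: 0.0, beta, delta)
u_constant = U(lambda t: 0.5, beta, delta)
u_bump     = U(lambda t: 1.0 if t < 1.0 else 0.0, beta, delta)

# For c = 0 the closed form is 2(1-beta)/(2-beta+delta), as in Lemma 2.7.4.
assert abs(u_truthful - 2 * (1 - beta) / (2 - beta + delta)) < 2e-3
# Truthful reporting (c = 0) beats both deviations when delta < beta.
assert u_truthful > u_constant and u_truthful > u_bump
```

For constant strategies c(t) = c the integral has the closed form (1 − β)(2 + c)/(2 + c − β + δ), which is decreasing in c exactly when δ < β, consistent with the mass-pushing argument above.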

The proof of Theorem 2.7.2 follows simply from the above analysis.

Proof of Theorem 2.7.2. All we need to do is to lower-bound the expected utility of an agent a if she is truthful. We omit the details as they are essentially similar. So, if all agents are truthful,

E[u(a)] ≥ 2(1 − β)/(2 − β + δ) − O(d^4 log^3(m)/√m)·β.

This shows that the strategy vector corresponding to truthful agents is a strong O(d^4 log^3(m)/√m)-Nash equilibrium.


2.8 Concluding Discussions

In this paper, we developed a model of dynamic matching markets to investigate the

features of good policy. This paper innovates by accounting for stochastic departures

and analyzing the problem under a variety of information conditions. Rather than

modeling market thickness via a fixed match-function, it explicitly accounts for the

network structure that affects the planner’s options. This allows market thickness to

emerge as an endogenous phenomenon, responsive to the underlying constraints.

In this part of the paper, to connect our findings to real-world markets, we first

review the key insights of our analysis and then discuss how relaxing our modeling

assumptions would have implications for our results. At the end, we recommend some

promising extensions.

2.8.1 Insights of the Paper

There are many real-world market design problems where the timing of transactions

must be decided by a policymaker. These include paired kidney exchanges, dating

agencies, and online labor markets such as oDesk. In such markets, policymakers

face a trade-off between the speed of transactions and the thickness of the market.

It is natural to ask, “Does it matter when transactions occur? How much does it

matter?” The first insight of this paper is that waiting to thicken the market can

yield substantial welfare gains. In addition, we find that naïve local algorithms that choose the right time to match can come close to optimal benchmarks that exploit the whole graph structure. This shows that the right timing decision is a first-order concern in dynamic matching markets.

A key finding of our analysis is that information and waiting time are complements: if the planner lacks information about agents’ departure times, losses are relatively large compared to those of simple algorithms that have access to that information. This shows that access to short-horizon information about departure times is

especially valuable to the planner. When the urgency of individual cases is private

information, we exhibit a mechanism without transfers that elicits such information

from sufficiently patient agents.


These results suggest that the dimension of time is a first-order concern in many matching markets, with welfare implications that static models do not capture. They also suggest that policymakers would reap large gains from acquiring timing information about agent departures, such as by paying for predictive diagnostic testing or monitoring agents’ outside options.

2.8.2 Discussion of Assumptions

In order to make this setting analytically tractable, we have made several important

simplifying assumptions. Here we discuss how relaxing those assumptions would have

implications for our results.

First, we have assumed that agents are ex ante homogeneous: They have in expectation the same average degree.37 What would happen if the planner knew that certain agents currently in the pool were more likely to have edges with future agents? Clearly, algorithms that treat heterogeneous agents identically could be inefficient. However, it is an open question whether there are local algorithms, sensitive to individual heterogeneity, that are close to optimal.

Second, we have assumed that agents’ preferences are binary: All acceptable

matches are equally good. In many settings, acceptable matches may vary in quality.

We conjecture that this would reinforce our existing results, since waiting to thicken

the market could allow planners to make better matches, in addition to increasing

the size of the matching.38

Third, we have assumed that agents have the memoryless property; that is, they become critical at some constant Poisson rate. One might ask what would be different if the planner knew ahead of time which agents would be long- or short-lived. Our

37Note, however, that agents are ex post heterogeneous as they have different positions in the trade network.

38To take one natural extension, suppose that the value of an acceptable match is v for both agents involved, where v is a random variable drawn iid across pairs of agents from some distribution F(·). Suppose that the Greedy and Patient algorithms are modified to select the highest match-value among the acceptable matches. Then the value to a matched agent under Greedy is (roughly) the highest among N draws from F(·), where N is distributed Binomial(k∗_Greedy, d/m). By contrast, the value to a matched agent under Patient is (roughly) the highest among N draws from F(·), where N is distributed Binomial(k∗_Patient, d/m). By our previous arguments, k∗_Patient > k∗_Greedy, so this strengthens our result.
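The comparison in the footnote above is easy to verify by simulation. The sketch below takes F to be uniform on [0, 1] and uses illustrative stand-ins for k∗_Greedy < k∗_Patient; all concrete numbers are our assumptions:

```python
import random

def expected_best_match(k, p, trials=5000, rng=random.Random(1)):
    """Monte Carlo estimate of E[max of N uniform draws], N ~ Binomial(k, p);
    an agent with no acceptable match (N = 0) contributes value 0."""
    total = 0.0
    for _ in range(trials):
        n = sum(rng.random() < p for _ in range(k))
        total += max((rng.random() for _ in range(n)), default=0.0)
    return total / trials

d_over_m = 0.1                  # illustrative match probability d/m
k_greedy, k_patient = 20, 40    # stand-ins with k*_Greedy < k*_Patient
assert expected_best_match(k_patient, d_over_m) > expected_best_match(k_greedy, d_over_m)
```

Because the binomial for Patient stochastically dominates the one for Greedy, the expected best match-value is higher under Patient, exactly as the footnote argues.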


performance bounds on the Omniscient algorithm provide a partial answer to this question: Such information may be gainful, but a large proportion of the gains can be realized via the Patient algorithm, which uses only short-horizon information about agents’ departure times.

In addition, our assumption on agents’ departure processes can be enriched by assuming agents have a range of sequential states, while an independent process specifies transition rates from one state to the next, and agents who are at the “final” state have some exogenous criticality rate. The full analysis of optimal timing under discounting in such an environment is a subject of further research. Nevertheless, our results suggest that for small waiting costs, if the planner observes critical agents, the Patient algorithm is close-to-optimal; and if the planner cannot observe the critical agents, it is close-to-optimal to wait until agents transition to the final state (as there is no risk of agents perishing before that time) and then greedily match those agents who are at risk of perishing.

Finally, our theoretical bounds on error terms, O(√m), are small compared to the market size only if the market is relatively large. What would happen if the market is small, e.g., if m < 100? To check the robustness of our results to the large-market assumption, we simulated our model for small markets. Our simulation results (see Section A.6) show that our results continue to hold for very small markets as well.

2.8.3 Further Extensions

We suggest some promising extensions. First, one could generalize the model to allow

multiple types of agents, with the probability of an acceptable transaction differing

across type-pairs. This could capture settings where certain agents are known ex ante

to be less likely to have acceptable trades than other agents, as is the case for patients

with high Panel Reactive Antibody (PRA) scores in paired kidney exchanges. The

multiple-types framework also contains bipartite markets as a special case.

Second, one could adopt more gradual assumptions about agent departure processes; agents could have a range of individual states with state-dependent perishing rates, and an independent process specifying transition rates between states. Our


model, in which agents transition to a critical state at rate λ and then perish imminently, is a limit case of the multiple-states framework.

Third, it would be interesting to enrich the space of preferences in the model,

such as by allowing matches to yield a range of payoffs. Further insights may come

by making explicit the role of price in dynamic matching markets. It is not obvious

how to do so, but similar extensions have been made for static models [49, 42], and

the correct formulation may seem obvious in retrospect.

Much remains to be done in the theory of dynamic matching. As market design

expands its reach, re-engineering markets from the ground up, economists will increasingly have to answer questions about the timing and frequency of transactions.

Many dynamic matching markets have important features (outlined above) that we

have not modeled explicitly. We offer this paper as a step towards systematically

understanding matching problems that take place across time.


Chapter 3

Random Allocation Mechanisms

3.1 Introduction

When cash transfers are limited and goods are indivisible, it can sometimes be impossible to allocate goods in an envy-free (“fair”) way. This challenge is faced, for example, when assigning students to courses or cadets to military bases, or when setting a competitive sports schedule.

competitive sports schedule. Early economic studies of this problem by Hylland and

Zeckhauser (1979, HZ) [43] and Bogomolnaia and Moulin (2001, BM) [19] assume

that each agent must receive just a single good and show that it is then possible

to allocate the probabilities of receiving each good in an efficient, envy-free manner.

In a recent paper, Budish, Che, Kojima, and Milgrom (2013, BCKM) [24] propose expanding this approach to a wider set of multi-item allocation problems in which the constraints may be more complex than merely a set of one-item-per-person constraints. For example, in course allocation, a student may wish to have at least one class in science and one in humanities in a particular term. They show that for any expected allocation that satisfies all the constraints, if the constraints have a particular “bihierarchy” structure, then the expected allocation can always be achieved by randomizing among pure allocations in which each fractional expected allocation is rounded up or down to an adjacent integer and all the constraints are simultaneously satisfied. When the conditions are satisfied, this sometimes makes it possible to use mechanisms that select efficient, envy-free expected allocations and to implement


those through randomization.

However, BCKM also found that the bihierarchy condition can be a necessary condition, and that can rule out some potential applications. For instance, the condition is violated in course allocation if there are both curricular limitations (“at most one math course”) and scheduling limitations (“only one course beginning at 10am”), and in school choice if a school with limited capacity has both gender and racial diversity constraints.

The goal of this paper is to expand BCKM’s approach to a much more general class of allocation problems by reconceptualizing the role of constraints. Our analysis shows that many more constraints can be managed if some of them are “soft”, in the sense that they can bear small errors with relatively small costs. More precisely, the innovation of this paper is to partition the full set of constraints into a set of hard constraints that must always be satisfied exactly, and a set of soft constraints that should be satisfied with high probability, at least approximately. The main theorem of the paper identifies a rich constraint structure that is approximately implementable, meaning that if an expected allocation satisfies all the constraints, then it can be implemented by randomizing among pure allocations that satisfy all the hard constraints and satisfy the soft constraints with only very small errors.

The importance of this result arises from the way it expands potential applications.

In the school choice example, the requirement that each student must be assigned

to exactly one school is (in our conception) a hard constraint that must be satisfied,

but the requirement that 50% of students in a school live in the walk zone may be

a soft constraint – if necessary, 48% will do. Allowing this flexibility is particularly

important when the constraints are inconsistent, and in other cases it provides greater

scope for accommodating individual student preferences. In particular, we employ

this result to fix the ex post unfairness of the random serial dictatorship mechanism,

while maintaining its strategy-proofness.


3.1.1 Model and Contributions

In this paper, we analyze a general model of matching with indivisible objects. Section 3.2 introduces the building blocks of our model. In Subsection 3.2.1, we propose a new notion of approximate implementation. The key conceptual feature of our model is that we partition the set of constraints into a set of hard constraints that are inflexible and a set of soft constraints that are flexible, and we call it a hard-soft partitioned constraint set. We say that a hard-soft partitioned constraint set is approximately implementable if for any feasible fractional assignment that satisfies both hard and soft constraints, there exists a lottery (probability distribution) over pure assignments such that the following three properties hold: (i) the expected value of the lottery is equal to the fractional assignment, (ii) the outcome of the lottery satisfies the hard constraints, and (iii) the outcome of the lottery satisfies the soft constraints with very small errors.1 We then ask: What kind of hard-soft partitioned constraint structures are approximately implementable?

The main theoretical contribution of the paper is stated in Theorem 3.3.1. This theorem has two key elements. First, it identifies a rich structure for soft constraints under which the whole constraint structure is approximately implementable, given that the structure of hard constraints is the same maximal structure introduced in BCKM – the “bihierarchical” structure. The structure that we identify for soft constraints has several applications in real-life allocation problems.

The second key element of the main theorem arises from its constructive proof. We prove Theorem 3.3.1 in Section B.1 by constructing a novel matching algorithm which approximately implements any given feasible fractional assignment. We invent a matrix operation that takes a fractional assignment as its input and (randomly) generates another assignment with fewer fractional elements as its output. By iterative applications of this operation, an integral assignment is generated.2 Throughout

1This requirement will be rigorously defined in Subsection 3.2.1. To see a numerical example, consider a school with 2000 seats. Our framework can guarantee that the probability of assigning more than 2200 students to the school is at most 0.0013. In particular, the probability of an error in the satisfaction of soft constraints goes to 0 (with an exponential rate) as the ‘goal’ gets larger.
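The numbers in the footnote above are consistent with a standard multiplicative Chernoff bound, P[X ≥ (1 + d)μ] ≤ exp(−d²μ/3) for a sum of independent indicator variables with mean μ. The paper’s actual bound is derived in Section B.1, so the specific form used here is only an illustrative stand-in:

```python
import math

def chernoff_upper(mu, overshoot):
    """Multiplicative Chernoff bound P[X >= (1 + d) * mu] <= exp(-d^2 * mu / 3)
    for a sum of independent 0/1 variables with mean mu, where d = overshoot/mu."""
    d = overshoot / mu
    return math.exp(-d * d * mu / 3)

print(round(chernoff_upper(2000, 200), 4))   # 0.0013, matching the footnote
print(chernoff_upper(4000, 400))             # shrinks exponentially as the goal grows
```

Doubling both the goal and the allowed overshoot squares the exponent’s scale, which is the “exponential rate” claimed in the footnote.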

2It is worth mentioning that the randomized mechanism stops in polynomial time, which is an important requirement for a practical matching algorithm in relatively large markets.


this stochastic process, the expected value of the assignment does not change and hard constraints are satisfied at all iterations. We then apply probabilistic concentration bounds to our randomized mechanism in order to prove that soft constraints are satisfied with very small errors. It is worth mentioning that the previous literature on the economic theory of implementation relies on the Birkhoff-von Neumann theorem [17, 77] (in HZ and BM) or a generalization thereof (in BCKM). In contrast, our random mechanism, as mentioned above, exploits a quite different set of probabilistic tools.
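The paper’s matrix operation is constructed in Section B.1. As a generic illustration of the kind of primitive such schemes iterate, the sketch below (ours, not the paper’s operation, which acts on cycles of a fractional assignment matrix) rounds a pair of fractional values whose sum is pinned down by a hard constraint, making at least one of them integral while preserving both expectations exactly:

```python
import random

def round_pair(x, y, rng=random.Random(2)):
    """One expectation-preserving rounding step for two fractional values in
    [0, 1] whose sum is fixed by a hard constraint. Moves mass between them so
    that at least one becomes integral (0 or 1), while E[x'] = x and E[y'] = y."""
    up = min(1.0 - x, y)   # largest shift of x upward (y moves down equally)
    dn = min(x, 1.0 - y)   # largest shift of x downward (y moves up equally)
    if rng.random() < dn / (up + dn):   # probabilities chosen to fix expectations
        return x + up, y - up
    return x - dn, y + dn

samples = [round_pair(0.3, 0.6) for _ in range(20000)]
# The pair sum (a hard constraint) is preserved exactly in every realization...
assert all(abs(a + b - 0.9) < 1e-9 for a, b in samples)
# ...and each coordinate is preserved in expectation (0.3 here).
assert abs(sum(a for a, _ in samples) / len(samples) - 0.3) < 0.02
```

Iterating a step of this kind drives the assignment to an integral one without ever violating the hard constraint, and concentration bounds then control how far sums appearing in soft constraints can drift.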

We then discuss two main corollaries of the main theorem with many implications

in real-world marketplaces. Our first corollary is stated in Subsection 3.3.2. The

corollary asserts that if the structure of hard constraints is hierarchical (as opposed to

bihierarchical), then for any general structure of soft constraints, the whole constraint

structure is approximately implementable. Several applications of this corollary are

discussed in Subsection 3.4.1. For instance, in the school choice setting or the course

allocation problem, market-makers can respect a hierarchy of student-side constraints

exactly, while approximately satisfying any other constraints.

The second important corollary of Theorem 3.3.1 is presented in Subsection 3.3.3.

This corollary is about local constraint structures. A constraint is local if it relates

to a single agent or a single object. For instance, “schools s1 and s2 cannot admit

more than q students in total” is not local because it involves multiple schools and

multiple students. Our second corollary states that given a set of hard constraints consisting only of capacity constraints (e.g., if the number of available seats in schools cannot be violated and all students should be assigned to exactly one school), any set of local soft constraints can be approximately satisfied. We discuss several applications for

which such a constraint structure is natural. In the school choice setting, for example,

this corollary can guarantee that each student is assigned to exactly one school and

schools’ capacity constraints are not violated, while racial, diversity, and walk zone

constraints are satisfied, at least approximately.

In Subsection 3.3.4, we state a fully generalized version of our main theorem and

show that given a bihierarchy of hard constraints, any set of soft constraints can

be approximately satisfied, but with a weaker notion of approximate satisfaction.


This finding shows that if the set of hard constraints has its maximal form (i.e. it is

bihierarchical) then one should either restrict the structure of soft constraints to what

we identified in the main theorem, or use a weaker notion of approximate satisfaction

for soft constraints with a more complicated structure. It is, however, worthwhile

to emphasize that even under this weaker notion, our quantitative bounds guarantee

that the probability of violating a soft constraint goes to 0 (with an exponential rate)

as the right-hand side of the constraint grows.

The tools that we exploit in constructing our randomized mechanism afford a flexibility that previous tools could not: our matching algorithm guarantees that for any arbitrary set of “weights” on the elements of soft constraints, those constraints will be satisfied with only very small errors. This has multiple interesting applications. First, the literature often considers constraints such as “the sum of all African-American students assigned to school 1 should be at least q”, in which all African-American students have equal weights. In our model, however, a soft constraint can require “the sum of male African-Americans plus k times the sum of female African-Americans should be at least q”. Second, in implementing walk-zone requirements, one can directly incorporate each student’s distance from different schools into the constraints; see Subsection 3.4.2.

The second application of our “generalized weights” setting is stated in Subsection 3.4.3, in which we discuss ex post properties of our randomized mechanism. Market-makers are concerned with ex post properties of allocation mechanisms because ex ante fairness does not guarantee ex post fairness. Our main result in this section guarantees that under our proposed randomized mechanism, ex post utilities of the agents (or objects) are approximately equal to their ex ante utilities, and ex post social welfare is approximately equal to the ex ante social welfare.

In Section 3.5, we employ our utility and welfare guarantees to improve two classical allocation mechanisms, namely the random serial dictatorship (RSD)3 and the pseudo-market mechanisms. It is well-known that RSD with multi-unit demand is ex ante fair and strategy-proof, but ex post (very) unfair. We fix the ex post unfairness

3RSD works as follows: draw a fair random ordering of the agents and then let agents select their most favorite bundle of objects (among those remaining) according to the realized random ordering without violating the constraints.


of RSD by constructing the expected allocation associated with the RSD, and then approximately implementing it by employing our main theorem. The mechanism remains strategy-proof by a revelation principle argument, and our utility bounds guarantee that the ex post allocation is “approximately” fair. Next, in ??, we employ the competitive equilibrium from equal incomes (CEEI) mechanism introduced in BCKM to construct a fractional assignment, and then employ our techniques to (approximately) satisfy many more constraints. Our implementation technique, moreover, guarantees that ex post utilities are approximately equal to ex ante utilities. Since the competitive equilibrium allocation is envy-free, it then follows that the ex post allocation is “approximately envy-free”.4

3.1.2 Related Work

Randomization is commonplace in everyday life and has been studied in various settings such as school choice, course allocation, and house allocation [1, 2, 21]. Perhaps the most practically popular random mechanism is to draw a fair random ordering of agents and then let the agents select their most favorite object (among those remaining) according to the realized random ordering without violating the constraints. This mechanism, which is known as Random Serial Dictatorship (RSD), is a desirable mechanism as it is strategy-proof and ex post Pareto efficient [2, 15, 29]. Nevertheless, RSD is ex ante inefficient, ex post (highly) unfair, and cannot handle lower quotas [19, 50, 40]. Manea (2009) [58], Kojima and Manea (2010) [52], and Che and Kojima (2010) [27] compare PS and RSD and analyze their connections in large markets.
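For reference, the single-unit-demand version of RSD described above can be sketched in a few lines; the preference lists in the example are, of course, hypothetical:

```python
import random

def random_serial_dictatorship(preferences, rng=random.Random(3)):
    """Single-unit-demand RSD: draw a uniformly random ordering of the agents,
    then let each agent take her most-preferred object still available.
    `preferences` maps each agent to a list of objects, best first."""
    order = list(preferences)
    rng.shuffle(order)
    available = {obj for prefs in preferences.values() for obj in prefs}
    assignment = {}
    for agent in order:
        for obj in preferences[agent]:
            if obj in available:
                assignment[agent] = obj
                available.remove(obj)
                break
    return assignment

# Three agents, three objects, complete (hypothetical) preference lists:
prefs = {"a1": ["h1", "h2", "h3"], "a2": ["h1", "h3", "h2"], "a3": ["h2", "h3", "h1"]}
match = random_serial_dictatorship(prefs)
assert len(match) == 3 and len(set(match.values())) == 3
```

Averaging the realized assignments over many random orderings recovers the expected allocation of RSD, which is the object the paper’s implementation technique takes as input.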

The idea to construct a fractional assignment and then implement it by a lottery over pure assignments was first introduced in Hylland and Zeckhauser (1979) [43] for cardinal utilities. Bogomolnaia and Moulin (2001) [19] constructed a mechanism, the Probabilistic Serial Mechanism (PS), for ordinal utilities based on the same

4This paper has benefited from helpful comments of many people. The invaluable constant guidance of Paul Milgrom has been essential. We thank Alex Wolitzky for his constructive comments on the earlier drafts of the paper. We thank Al Roth for his valuable suggestions. We are also grateful to Gabriel Carroll, Kareem Elnahal, Fuhito Kojima, Matthew Jackson, Bobby Pakzad-Hurson, and Ilya Segal for their great suggestions, and several seminar participants for their comments and feedback. All errors are ours.


trick. Both papers model one-to-one matching markets with no other constraints. Budish, Che, Kojima, and Milgrom (2013) [24] build on those two papers by considering a richer constraint structure.5 Our paper extends this literature by designing a randomized mechanism which can accommodate a much richer class of constraints.

The approximate satisfaction of constraints has been studied in a few recent papers. Budish (2011) studies the problem of combinatorial assignment by introducing a notion of approximate competitive equilibrium from equal incomes, which treats course capacities as flexible constraints [21]. Ehlers et al. (2011) introduce a deferred acceptance algorithm with soft bounds in which they adjust group-specific lower and upper bounds to achieve a fair and non-wasteful mechanism [32]. There are some key points that separate our paper from these works. First, we propose a framework which can handle “overlapping” constraints. For instance, in the school choice setting, we can accommodate racial, gender, and walk-zone priority constraints simultaneously. Second, we provide a rich language for the market-maker to declare a partitioned constraint set, which contains both flexible and inflexible constraints. Third, our mechanism runs in polynomial time, whereas the approach introduced in [21] is computationally hard.6

The problem of reduced-form implementation in the auction literature is also related to our work [60, 20, 26]. In this problem, an interim allocation, which describes the marginal probabilities of each bidder obtaining the good as a function of his type, is constructed. Then, as in our problem, the question asked is: which interim allocations can be implemented by a lottery over feasible pure allocations? The approximate satisfaction of constraints, however, is not studied in that literature.

The trick of approximately implementing a fractional allocation has also been employed in Nguyen, Peivandi, and Vohra (2014) [63], who model a one-sided matching market with complementarities. Their method is both conceptually and technically different from ours. From a conceptual perspective, their goal is to

5 In a recent work, Pycia and Unver [68] study a more general structure (the Totally Unimodular or TU structure) and show that they can accommodate constraints such as strategy-proofness and envy-freeness as linear constraints, as long as they fit into the TU structure. Our approach is conceptually different from theirs, since we consider flexible constraints (i.e., goals) which may not fit into the TU structure.

6 It cannot be solved in polynomial time [69].


handle complementarities in a framework with only capacity constraints, whereas our paper is concerned with implementing generalized constraint structures. From a technical perspective, they employ a different iterative rounding technique to first prove the existence of the lottery; then, to construct the lottery, their paper uses the "ellipsoid method," which implements an assignment with an expected value that is arbitrarily close (but not exactly equal) to the original fractional assignment, unlike our method, which employs a different technique and implements the assignment exactly. Their approximation bounds are additive, rather than probabilistic. The reason that they are able to provide a small additive upper bound for the violation of capacities in [63] is that they have no constraints other than capacity constraints, and their framework does not include any intersecting constraints.

In addition, various rounding techniques have been developed in the computer science literature; for instance, see [28, 74]. The techniques used in [28, 74] inspired our design. Their rounding techniques, however, are specifically designed for the job scheduling problem. As a result, their randomized algorithms accommodate neither non-local soft constraints nor (bi)hierarchical hard constraints.

3.2 Setup

Consider an environment in which a finite set of objects O has to be allocated to a finite set of agents N. We denote the set of agent-object pairs, N × O, by E, where each (n, o) ∈ E is an edge. Sometimes we use e to denote edges. A pure assignment is defined by a non-negative matrix X = [Xno], where each Xno ∈ {0, 1} denotes the amount of object o assigned to agent n, for all (n, o) ∈ E. We require the matrix to be binary-valued to capture the indivisibility of the objects.

A block B ⊆ E is a subset of edges. A constraint S is a triple (B, q_B, q̄_B): a block B associated with a vector of integer quotas (q_B, q̄_B), the floor and ceiling quotas on B. A structure is a subset E ⊆ 2^E, i.e., a collection of blocks. A constraint set is a set of constraints. Let q = [(q_B, q̄_B)_{B∈E}].

We say that X is feasible with respect to (E, q) (or simply, with respect to E when q is clear from the context) if


Figure 3.1: Model framework

q_B ≤ ∑_{e∈B} Xe ≤ q̄_B    ∀B ∈ E.    (3.2.1)

We call a block B ∈ E agent k's capacity block when B = {Xkj | j ∈ O}. Similarly, we call a block B ∈ E object m's capacity block when B = {Xim | i ∈ N} (see Figure 3.2). A capacity constraint is a constraint (B, q_B, q̄_B) where B ∈ E is a capacity block. We sometimes refer to the capacity constraints of agents and objects as row constraints and column constraints, respectively.

Figure 3.2: Capacity blocks
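As a quick illustration, feasibility in the sense of (3.2.1) amounts to checking each block's quota interval against the pure assignment. The sketch below is ours; names such as `is_feasible` and the two-agent example are illustrative, not from the paper:

```python
def is_feasible(X, constraints):
    """X: dict mapping (agent, object) edges to 0/1 values.
    constraints: iterable of (block, q_lo, q_hi) triples, where each
    block is a set of edges. Returns True iff every quota holds."""
    return all(q_lo <= sum(X[e] for e in block) <= q_hi
               for block, q_lo, q_hi in constraints)

# Two agents, two objects; row and column capacity blocks with unit quotas
# encode "each agent gets one object, each object goes to one agent".
agents, objects = ["n1", "n2"], ["o1", "o2"]
X = {("n1", "o1"): 1, ("n1", "o2"): 0, ("n2", "o1"): 0, ("n2", "o2"): 1}
constraints = (
    [({(n, o) for o in objects}, 1, 1) for n in agents]    # row blocks
  + [({(n, o) for n in agents}, 1, 1) for o in objects])   # column blocks
print(is_feasible(X, constraints))  # True
```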

A fractional assignment is defined by a matrix x = [xno], where each xno ∈ [0, 1]

is the quantity of object o assigned to agent n. To distinguish between pure and


fractional assignments, we usually use X to denote a pure assignment and x for a

fractional assignment. We occasionally use the term expected assignment to refer to a fractional assignment.

Given a structure E and associated quotas q, a fractional assignment matrix x is implementable under quotas q if there exist positive numbers λ1, . . . , λK summing to one, and pure assignments X1, . . . , XK that are feasible under q, such that

x = ∑_{i=1}^{K} λi Xi.

We also say that a structure E is universally implementable if, for any quotas q = (q_B, q̄_B)_{B∈E}, every fractional assignment matrix satisfying q is implementable

under q.
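Implementability of a candidate lottery can be verified mechanically: the weights must sum to one and the weighted mixture of pure assignments must reproduce x edge by edge. A small sketch under hypothetical names (the half-half example is ours):

```python
# Check that x is the stated convex combination of pure assignments.
def mixes_to(x, lottery, tol=1e-9):
    """lottery: list of (weight, pure_assignment) pairs."""
    if abs(sum(w for w, _ in lottery) - 1.0) > tol:
        return False
    return all(abs(x[e] - sum(w * X[e] for w, X in lottery)) <= tol
               for e in x)

# A uniform fractional assignment implemented by mixing two permutations.
x = {("n1", "o1"): 0.5, ("n1", "o2"): 0.5,
     ("n2", "o1"): 0.5, ("n2", "o2"): 0.5}
X1 = {("n1", "o1"): 1, ("n1", "o2"): 0, ("n2", "o1"): 0, ("n2", "o2"): 1}
X2 = {("n1", "o1"): 0, ("n1", "o2"): 1, ("n2", "o1"): 1, ("n2", "o2"): 0}
print(mixes_to(x, [(0.5, X1), (0.5, X2)]))  # True
```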

The main existing theoretical result on the implementability of a structure is due to BCKM [24], who identify bihierarchy as a sufficient condition for the universal implementability of a structure. More precisely, a structure

H is a hierarchy if for every pair of blocks B and B′ in H, we have that B′ ⊂ B or

B ⊂ B′ or B ∩B′ = ∅. A simple hierarchy is depicted in Figure 3.3. A structure H is

a bihierarchy if there exist two hierarchies H1 and H2 such that H1 ∩ H2 = ∅ and

H = H1 ∪ H2. The following theorem identifies a sufficient, and almost necessary,

condition under which a structure is universally implementable.

Figure 3.3: A hierarchy


Theorem 3.2.1. [BCKM, 2013] If a structure H is a bihierarchy, then it is universally implementable. In addition, if H contains all agents' and objects' capacity blocks, then it is universally implementable if and only if it is a bihierarchy.
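The hierarchy ("laminar") property is easy to test directly from the definition: every pair of blocks must be nested or disjoint. A minimal sketch, with helper names of our own choosing:

```python
# Test the laminar property: any two blocks are nested or disjoint.
def is_hierarchy(blocks):
    blocks = [frozenset(b) for b in blocks]
    return all(B <= C or C <= B or not (B & C)
               for i, B in enumerate(blocks) for C in blocks[i + 1:])

rows = [{(n, o) for o in range(3)} for n in range(3)]  # H1: row blocks
cols = [{(n, o) for n in range(3)} for o in range(3)]  # H2: column blocks
print(is_hierarchy(rows))         # True: rows are pairwise disjoint
print(is_hierarchy(cols))         # True
print(is_hierarchy(rows + cols))  # False: a row and a column overlap
```

Note that `rows + cols` fails the single-hierarchy test yet is the union of two disjoint hierarchies, i.e., exactly the row/column bihierarchy covered by Theorem 3.2.1.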

3.2.1 Approximate Implementation

In many assignment problems, the involved constraints have multiple intersections and the bihierarchy assumption fails. The following two examples clarify the limitations of bihierarchy.

Example 3.2.2. Consider a course allocation setting, where students are required to

take at least one of the two courses {f1, f2} and at most one of the finance course

f1 and the microeconomics course m. It is easy to verify that together with courses’

capacities, these constraint blocks do not form a bihierarchy.

Example 3.2.3. In the Boston School Program in 2012, fifty percent of each school's seats were set aside for walk-zone priority students. Consider a school which also has a group-specific quota on female students. Together with the requirement that each student should be assigned to one school, these blocks do not form a bihierarchy. (See Figure 3.4 for an illustration.)

Figure 3.4: Overlapping constraints may not fit into the bihierarchical structure. In the school choice setting, for instance, walk-zone priorities and gender (or racial) diversity requirements are inconsistent with the bihierarchy.

We overcome the limitations of the bihierarchical structure by reconceptualizing

the role of constraints. We show that by treating some constraints as goals rather

than inflexible constraints, we can accommodate many more constraints. In the school

choice setting, for example, a slight violation in racial or gender diversity goals is not


infinitely costly. When such goals are modeled as inflexible constraints, it is implicitly

assumed that even a slight violation is infinitely costly, which is often not the case.

More precisely, we accommodate both flexible and inflexible constraints in the allocation problem by proposing the following framework: we ask the market-maker to partition the full set of constraints into a set of hard constraints, which must be satisfied exactly, and a set of soft constraints, which must be satisfied at least approximately. Accordingly, the structure is partitioned into two sets: a set of hard blocks, H, which are blocks of inflexible constraints, and a set of soft blocks, S, which are blocks of flexible constraints. We refer to E = H ∪ S as a hard-soft partitioned structure, or simply a partitioned structure.

Another novel generalization in our model is that soft constraints can have a more general form than hard constraints. More precisely, for a soft block B′, we say X is feasible with respect to B′ if

q_{B′} ≤ ∑_{e∈B′} we Xe ≤ q̄_{B′},

where each weight we can take any arbitrary non-negative value, and q_{B′} and q̄_{B′} can be any non-negative real numbers. Recall that, as in BCKM, for a hard block B we require we = 1 for all e ∈ B, and q_B and q̄_B are restricted to be non-negative integers.

This generalization expands the scope of practical applications of the model; this will

be discussed in Subsection 3.4.1.

Our goal in this paper is to identify structural conditions imposed on H and S under which E = H ∪ S is "approximately implementable". In the following, we

rigorously define the notion of approximate implementation.

Definition 3.2.4. Given a hard-soft partitioned structure E = H ∪ S, we say E is

Approximately Implementable if for any vector of quotas q and any expected

assignment x which is feasible with respect to (E ,q), there exists a lottery (probability

distribution) over pure assignments X1, . . . , XK such that, if we denote the outcome

of the lottery by the random variable X, the following properties hold:

P1. Assignment Preservation: E[X] = x.


P2. Exact Satisfaction of Hard Constraints: All constraints in H are satisfied.

P3. Approximate Satisfaction of Soft Constraints: For any soft block B ∈ S with ∑_{e∈B} xe = µ and for any ε > 0, we have

Pr(dev+ ≥ εµ) ≤ e^{−µε²/3}    (3.2.2)

Pr(dev− ≥ εµ) ≤ e^{−µε²/2}    (3.2.3)

where dev+ and dev− are defined as follows:

dev+ = max(0, ∑_{e∈B} Xe − µ)

dev− = max(0, µ − ∑_{e∈B} Xe).

Property 1 simply states that there exists a lottery which implements x. Property 2 states that hard constraints are satisfied with no error. Property 3 is the core conceptual part of the definition, quantifying our notion of approximation. It is appealing because it guarantees that the probability of violating a constraint is exponentially decreasing in µ and goes to 0 very rapidly. Hence, for large goals (i.e., constraints with large right-hand or left-hand sides), the probability of violating them by a factor greater than ε is very small. Property 3 also guarantees that the probability of observing very bad events (violating constraints by a large multiplicative factor ε) decreases exponentially as ε grows. For example, in a school with 2000 seats, the probability of admitting more than 2100 students is bounded above by 0.19, and the probability of admitting more than 2200 students is no more than 0.0013.
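The numbers in the school example follow directly from bound (3.2.2) with µ = 2000 and ε = 100/2000 or 200/2000. A quick check (the function name is ours):

```python
import math

# Chernoff-style upper bound on Pr(dev+ >= eps * mu) from Property 3,
# Eq. (3.2.2): exp(-mu * eps^2 / 3).
def upper_tail_bound(mu, eps):
    return math.exp(-mu * eps ** 2 / 3)

# School with mu = 2000 expected admits.
print(round(upper_tail_bound(2000, 100 / 2000), 2))   # 0.19: > 2100 students
print(round(upper_tail_bound(2000, 200 / 2000), 4))   # 0.0013: > 2200 students
```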

3.3 The Main Theorem

We present our main theoretical result in this section. Given a partitioned structure

E = H∪S, our goal in this section is to identify structures for H and S under which

E is approximately implementable in the sense of Definition 3.2.4.


First of all, note that Theorem 3.2.1 shows that even if S = ∅ (i.e., there are no soft constraints), in order for E to be implementable, bihierarchy is a sufficient and (almost) necessary condition7 for H. In other words, bihierarchy is the weakest condition we can impose on hard constraints. We maintain this maximal structure and let hard blocks form a bihierarchy; i.e., we assume H = H1 ∪ H2, where H1 and H2 are two hierarchies. Then, given a bihierarchical hard structure, we aim to identify a structural condition, if any, on the soft blocks S under which E = H ∪ S is approximately implementable.

In the corollaries of our main theorem, we show that by imposing further restrictions on the structure of hard blocks, one can approximately implement a fully general set of soft constraints.

Figure 3.5: The solid blocks form a hierarchy H1. The dashed blocks are all in the deepest level of H1. A block that, for example, contains X32 and X33 is not in the deepest level of H1.

3.3.1 The Structure of Soft Blocks

Now we show that if H forms a bihierarchy, there exists a rich structure for the soft

blocks S under which E = H∪S is approximately implementable. To do so, we need

to define a few helpful concepts. For a block B ∈ S, we say that B is in the deepest

level of H1 if for any block C ∈ H1, either B ⊆ C or B ∩C = ∅. (See Figure 3.5 for

7 We use the term "almost" because it is not a necessary condition in general, but it is necessary in two-sided matching markets with finite capacities.


an illustration.) We also say that B ∈ S is in the deepest level of a bihierarchy H = H1 ∪ H2 if it is in the deepest level of either H1 or H2. The following theorem

states our main result.

Theorem 3.3.1. [The Main Theorem] Let E = H ∪ S be a hard-soft partitioned

structure such that H is a bihierarchy and any block in S is in the deepest level of H.

Then, E is approximately implementable.

Proof Overview. We present only an overview of the proof here. The full proof

can be found in Section B.1. The proof is constructive; that is, we propose a randomized mechanism that, given a partitioned structure with the properties described

in Theorem 3.3.1, approximately implements a given feasible fractional assignment.

To do so, let us define a constraint to be tight if it is binding, and to be floating

otherwise. This definition applies to the implicit constraints 0 ≤ xe ≤ 1 for all e ∈ E.

The core of our randomized mechanism is a probabilistic operation that we design, called Operation X. We iteratively apply Operation X to the initial fractional assignment until a pure assignment is generated. At each iteration t, symbolically depicted in Figure 3.6, the fractional assignment xt is converted to xt+1 in such a way that: (1) the number of floating constraints decreases, (2) E(xt+1 | xt) = xt, and (3) xt+1 is feasible with respect to H. The first property guarantees that after a finite (and small) number of iterations8, the obtained assignment is pure. The second property ensures that the resulting pure assignment equals the original fractional assignment in expectation. The third property guarantees that all hard constraints are satisfied throughout the whole process of the mechanism.

Figure 3.6: A symbolic representation of our iterative mechanism. At each iterationt, the fractional assignment xt is converted to xt+1, and this continues until a pureassignment is generated.

8 Our randomized mechanism stops after at most |H| + |E| iterations.


In the last step, we prove that after iterative applications of Operation X, soft constraints are approximately satisfied. Roughly speaking, we design Operation X in such a way that it never increases (or decreases) two (or more) elements of a soft constraint in the same iteration. Consequently, the elements of each soft block become "negatively correlated". Negative correlation then allows us to employ probabilistic concentration bounds to prove that soft constraints are approximately satisfied.

In Section B.1, we design Operation X and prove its desired properties.
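To give a flavor of expectation-preserving rounding, the toy step below moves probability mass between two fractional entries so that their sum is conserved and each entry keeps its expectation. This is the classic dependent-rounding move, offered only as a stand-in for Operation X, whose actual definition is in Section B.1:

```python
import random

# Toy rounding step: randomly round the pair (a, b), 0 < a, b < 1,
# keeping a + b fixed and E[a'] = a, E[b'] = b.
def pair_round(a, b):
    delta_up = min(1 - a, b)  # how far a can rise while b falls
    delta_dn = min(a, 1 - b)  # how far a can fall while b rises
    if random.random() < delta_dn / (delta_up + delta_dn):
        return a + delta_up, b - delta_up
    return a - delta_dn, b + delta_dn

random.seed(0)
samples = [pair_round(0.3, 0.6) for _ in range(200_000)]
print(all(abs(a + b - 0.9) < 1e-12 for a, b in samples))   # True: sum conserved
print(round(sum(a for a, _ in samples) / len(samples), 2)) # close to 0.3
```

After one step at least one entry of the pair lands in {0, 1} (here the pair becomes (0.9, 0.0) or (0.0, 0.9)), mirroring the property that the number of floating constraints decreases at every iteration.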

3.3.2 Corollary 1: Fully General Soft Structure

The first corollary of the main theorem asserts that if H forms a single hierarchy (rather than two hierarchies), then for any arbitrary set of soft constraints, E = H ∪ S is approximately implementable.

Corollary 3.3.2 (of Theorem 3.3.1). Let E = H ∪ S be a hard-soft partitioned constraint set, where H is a single hierarchy; i.e., H1 = ∅ or H2 = ∅. Then, for all S ⊆ 2^E, E is approximately implementable.

Proof. By assumption, at least one of H1 or H2 is empty. Without loss of generality, suppose H1 = ∅. We add a "dummy" constraint to H1 which contains all the elements, i.e., the constraint 0 ≤ ∑_{e∈E} xe < ∞. Obviously, any soft constraint block is in the deepest level of H1. Hence, by Theorem 3.3.1, E is approximately implementable.

This corollary illustrates the trade-off between the richness of the hard and soft structures. On the one hand, if H forms a bihierarchy (which is almost the richest possible structure for H), then by Theorem 3.3.1, S needs to be in the deepest level of H. On the other hand, if H forms a hierarchy (which is not as rich as a bihierarchy), then S may have its richest possible structure: a fully general structure.

3.3.3 Corollary 2: Local Structure

Our second corollary considers a specific structure for H, involving only capacity

blocks. Recall that an agent or object’s capacity (or an agent’s row and an object’s


Figure 3.7: A local structure can contain any kind of blocks, as long as the blocks are subsets of columns or rows; i.e., they involve a single object and possibly multiple agents, or a single agent and possibly multiple objects.

column) block is the block that involves all the elements of the row (column) corresponding to that agent (object) in the assignment matrix (see Figure 3.2). We say that a structure is local if each block involves one agent with possibly multiple objects, or one object with possibly multiple agents, but not multiple agents and multiple objects at the same time. In this case, if S is a local structure, then E = H ∪ S is approximately implementable.

To fix ideas, let E be a structure such that E = H ∪ S, where H = H1 ∪ H2, H1

contains all row blocks, and H2 contains all column blocks. Also, let S contain only

sub-row or sub-column blocks, which we formally define below.9

Definition 3.3.3. The structure corresponding to an agent n ∈ N, denoted by E(n), is the set of all blocks B ∈ E such that B can be represented as

q_B ≤ ∑_{j∈Z} xnj ≤ q̄_B

for some subset Z ⊆ O. By adapting this definition in a natural way, we also define the structure corresponding to an object o ∈ O, and denote it by E(o).

9 This model of 'local' structures, which is a special case of our model, has been studied in [74] as well.


Definition 3.3.4. A structure E is local if

E = ⋃_{v∈N∪O} E(v).

In words, a structure is local if all blocks are sub-row or sub-column blocks. Note

that for any v ∈ N ∪ O, there are no restrictions on the structure of the blocks in

E(v), e.g. they may have intersections. An example of a local structure is depicted

in Figure 3.7.
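Locality in the sense of Definition 3.3.4 can be checked edge-wise: a block passes if its edges share a single agent or a single object. A sketch with names of our own choosing:

```python
# A block is "local" if its edges all lie in one row (same agent) or in
# one column (same object) of the assignment matrix.
def is_local(blocks):
    def in_one_line(block, axis):  # axis 0: agent/row, axis 1: object/column
        return len({e[axis] for e in block}) == 1
    return all(in_one_line(b, 0) or in_one_line(b, 1) for b in blocks)

sub_row = {("n1", "o1"), ("n1", "o3")}   # one agent, two objects
sub_col = {("n1", "o2"), ("n3", "o2")}   # one object, two agents
diagonal = {("n1", "o1"), ("n2", "o2")}  # two agents AND two objects
print(is_local([sub_row, sub_col]))  # True
print(is_local([diagonal]))          # False
```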

Corollary 3.3.5 (of Theorem 3.3.1). Let E = H ∪ S be a structure such that H is the set of all capacity blocks and S is a local structure. Then E is approximately

implementable.

Proof. Since H is the set of all capacity blocks and S is local, any block in S is in

the deepest level of H1 or H2. The corollary follows from Theorem 3.3.1.

3.3.4 Generalized Structures

In this section, we generalize our result further and show that even if a constraint is not in the deepest level of the bihierarchy of hard constraints, it can still be approximately satisfied, with a slightly weaker notion of approximate satisfaction.

First, we need to define a new concept which, intuitively, describes the complexity of the structure of a soft constraint. Consider a bihierarchy H = H1 ∪ H2. For a block B ∈ S, we say that B is in depth k of hierarchy H1 if B can be partitioned into k subsets B1, B2, · · · , Bk, all of which are in the deepest level of H1, and moreover, no partition of B into k − 1 subsets satisfies this property (see Figure 3.8 for an illustration). We also say that B ∈ S is in depth k of the bihierarchy H = H1 ∪ H2 if it is in depth k of either H1 or H2.
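The depth of a block can be computed by grouping its elements by their "containment signature": elements contained in exactly the same blocks of the hierarchy can share one deepest-level part, so the minimal number of parts equals the number of distinct signatures. A sketch built on that observation (names and the example grid are ours):

```python
# Depth of a soft block relative to a hierarchy: count distinct
# containment signatures (the set of hierarchy blocks covering each edge).
def depth(block, hierarchy):
    signatures = {frozenset(i for i, h in enumerate(hierarchy) if e in h)
                  for e in block}
    return len(signatures)

# Hierarchy of column blocks over rows 1..3 and columns 1..4.
cols = [{(r, c) for r in range(1, 4)} for c in range(1, 5)]
B = {(1, 3), (1, 4), (2, 3), (2, 4)}  # spans columns 3 and 4, as in Fig. 3.8
print(depth(B, cols))                 # 2
print(depth({(1, 3), (2, 3)}, cols))  # 1: fits one deepest-level block
```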

The following theorem states that any soft constraint is approximately implementable, and that the probabilistic guarantee we provide for the error size depends on the depth of the block corresponding to the constraint.

Theorem 3.3.6. Let E = H ∪ S be a hard-soft partitioned structure such that H is

a bihierarchy. Then, for any vector of quotas q and any expected assignment x which


Figure 3.8: The block B = {X13, X14, X23, X24} is in depth 2 of the depicted hierarchy, since it can be partitioned into two subsets, B1 = {X13, X23} and B2 = {X14, X24}, both of which are in the deepest level of the hierarchy.

is feasible with respect to (E ,q), there exists a lottery (probability distribution) over

pure assignments X1, . . . , XK such that, if we denote the outcome of the lottery by

the random variable X, the following properties are satisfied:

P1. E[X] = x.

P2. All constraints in H are satisfied.

P3. For any ε > 0 and for any soft block B ∈ S which is in depth k of H, if ∑_{e∈B} xe = µ, then we have

Pr(dev+ ≥ εµ) ≤ k · e^{−µε²/(3k)}    (3.3.1)

Pr(dev− ≥ εµ) ≤ k · e^{−µε²/(2k)}    (3.3.2)

where dev+ and dev− are defined as follows:

dev+ = max(0, ∑_{e∈B} Xe − µ)

dev− = max(0, µ − ∑_{e∈B} Xe).

We prove this theorem in ??.


This theorem clarifies the natural trade-off between the complexity of the structure of hard constraints and the probabilistic guarantees that we provide for soft constraints: the richer the former, the weaker the latter becomes. On the one hand, we have seen in Corollary 3.3.2 that if the structure of hard blocks is hierarchical, then any set of soft constraints is approximately implementable. Theorem 3.3.6, on the other hand, shows that if the market-maker insists on having a bihierarchical hard structure, then he should either restrict the structure of soft blocks to the deepest level of the bihierarchy, or weaken the probabilistic guarantees for satisfying the 'goals' that are not in the deepest level of the bihierarchy.

3.4 Applications

In this section, we discuss several applications of our results. We start with applications in the school choice environment. We then introduce a novel way to handle walk-zone priority quotas based on students' distances to different schools. Next, we show how our framework can provide appealing ex post guarantees for the efficiency and fairness of the final allocation. Finally, we modify the popular Random Serial Dictatorship mechanism in multi-unit demand settings to make it ex post approximately fair.

3.4.1 Diversity Requirements in School Choice

Consider a school choice setting, where n students are to be assigned to k schools.

Several types of constraints naturally arise in this market. A few examples are:

• Capacity constraints of schools and students. These constraints require that each student must be assigned to exactly one school, and that each school has a limited number of available seats.

• Walk-zone priorities. Families get walk-zone priority to any school within their

walk-zone. For instance, schools in the Boston School Program are required to

assign fifty percent of their seats to students within the walk-zone. The other

half is open to everyone, including those in the walk-zone.


• Affirmative action policies. Affirmative action is defined as "positive steps taken to increase the representation of women and minorities in areas of employment, education, and culture from which they have been historically excluded."[3] One goal of such policies is to increase diversity and to balance out the social effects that weaken specific groups.10 Affirmative action policies are usually implemented as minimum quotas on students within a minority group.11

• Grade-based quotas. Schools may have grade-based diversity policies. For instance, New York City's Educational Option program has quotas based on test scores [1].

The bihierarchy assumption often fails when multiple constraints such as these

exist. For example, if a school has minimum quotas on both female students and

walk-zone priority students, as in Example 3.2.3, the blocks associated with these two

constraints overlap and the bihierarchy assumption fails. In contrast, our framework

can accommodate the above-mentioned constraints even if their blocks overlap.

The importance of our result is amplified by noting that school-side constraints

are somewhat flexible, since a school may be willing to go a bit over capacity in order

to satisfy gender or racial diversity requirements. In the following, we show one way

to apply both Corollary 3.3.2 and Corollary 3.3.5 to the school choice setting.

Corollary 3.3.2 in the school choice setting: Let H be a single hierarchy which includes all the blocks of student-side inflexible constraints. A natural student-side hard constraint is that each student should be assigned to exactly one school. Hence, one can define H to be the set of all student-side capacity blocks, with q_B = q̄_B = 1 for all B ∈ H. Now, by Corollary 3.3.2, any general set of constraints can be approximately satisfied.

It is worth emphasizing that Corollary 3.3.2 allows for multiple schools to be

involved in the same constraint (see Figure 3.9 for an illustration), which is important

in some applications. For instance, in New York City public schools, a considerable

10 Another argument in favor of affirmative action policies is that they increase structural integration, which "serves the ideal of equal opportunity."[45]

11 See [39, 4, 53] for a detailed theoretical analysis of affirmative action policies.


Figure 3.9: In the school choice problem, when hard constraints require each student to be assigned to exactly one school, the set of hard blocks forms a single hierarchy. Consequently, by Corollary 3.3.2, any set of soft constraints can be approximately implemented.

fraction of these schools are co-located12. Consequently, they have joint quotas, such as an upper quota on the number of students who can be inside a school's building at any point in time. By Corollary 3.3.2, these "joint" blocks can be included in S, and our mechanism can satisfy them with very small errors.

Corollary 3.3.5 in the school choice setting: If a violation of either row or column constraints is very costly, then Corollary 3.3.5 can be more useful than Corollary 3.3.2.

In this case, we define H to be the set of all row and column blocks. Obviously, H is

a bihierarchy. Now we can apply Corollary 3.3.5 to guarantee that for any local S,

E = H ∪ S is approximately implementable.

3.4.2 Distance-based Walk-zone Priorities

Policy-makers in school choice systems are often interested in prioritizing students based on their distance. A typical way to implement walk-zone priorities is to partition the city into artificial zones and impose lower quotas on the number of students who live in the same zone as the school. This method treats students who live

12 See http://www.nyccharterschools.org/sites/default/files/resources/Facts_Colocation.pdf


just inside and outside of each zone’s border very differently, as it assigns a weight 1

to students who live inside and a weight 0 to students who live outside.

A more natural way to include distance-based priorities in the school choice problem is to assign a weight to each student-school pair (s, h) based on student s's distance to school h. More formally, one can impose

q_B ≤ ∑_{(s,h)∈B} d(s,h) x(s,h) ≤ q̄_B

as a soft constraint, where d(s,h) is either the distance of student s from school h or any other "penalty function".13 It is straightforward to see that this can be accommodated in our framework, since soft constraints can have real-valued coefficients.
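A distance-weighted soft constraint of this form is straightforward to evaluate. In the sketch below the distances and quota values are made up purely for illustration:

```python
# Evaluate a weighted soft constraint sum_{e in B} w_e * X_e and check it
# against hypothetical real-valued bounds.
def weighted_load(X, weights):
    return sum(weights[e] * X[e] for e in weights)

# Hypothetical distances (km) from three students to one school h.
d = {("s1", "h"): 0.4, ("s2", "h"): 1.3, ("s3", "h"): 2.5}
X = {("s1", "h"): 1, ("s2", "h"): 1, ("s3", "h"): 0}
q_lo, q_hi = 0.0, 3.0  # made-up soft bounds on total walking distance
total = weighted_load(X, d)
print(round(total, 2), q_lo <= total <= q_hi)  # 1.7 True
```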

3.4.3 Ex post Guarantees

In practice, the indivisibility of the objects and (possibly) the lack of monetary transfers make the allocation of resources likely to be asymmetric and unfair. One of the main motivations for randomization is to restore fairness by constructing an ex ante fair allocation. Nevertheless, given a fair fractional allocation, there could be very large discrepancies in realized utilities.

Our next application concerns the ex post properties of our proposed randomized mechanism. We show that the mechanism approximately maintains the fairness and efficiency properties of the original fractional assignment. Then, in the next section, we employ those guarantees to refine the classical random serial dictatorship (RSD) mechanism. In particular, we show that by adding utility goals (as soft constraints) to the RSD mechanism, we can fix its ex post unfairness while it remains strategy-proof. We also employ the same guarantees to refine the "pseudo-market mechanism" (introduced in HZ and BCKM) in ??. Our extension helps us to manage a richer set of constraints and to provide ex post guarantees for utilities.

The following definition captures our main notion of approximate guarantees for utilities and welfare.

Definition 3.4.1. A random variable x is approximately lower-bounded by a constant µ (denoted by µ ≲ x) if the following two conditions hold:

13d(s,h) can incorporate considerations such as how accessible school h is for student s by public transportation, as well.


CHAPTER 3. RANDOM ALLOCATION MECHANISMS 84

1. E[x] = µ

2. Pr(x ≤ µ(1 − ε)) ≤ e^{−µε²/2} for every ε > 0

In words, a random variable x is approximately lower-bounded by a constant µ if

x is equal to µ in expectation and the probability of x being less than µ(1− ε) is very

small, for any ε > 0. It is clear that if µ = 0, then any random variable for which

E(x) = 0 is approximately lower-bounded by 0. As will be clear soon, this definition

is particularly interesting for larger values of µ.
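The tail condition in Definition 3.4.1 is exactly a multiplicative Chernoff-type bound, so it can be sanity-checked numerically when x is a sum of independent indicator variables. A minimal sketch, with all parameters chosen as illustrative assumptions rather than taken from the text:

```python
import math, random

# x is a sum of n independent Bernoulli(p) draws, so E[x] = mu = n*p and the
# multiplicative Chernoff bound gives Pr(x <= mu(1-eps)) <= exp(-mu*eps^2/2).
random.seed(0)
n, p = 400, 0.25          # assumed lottery: 400 independent 0/1 events
mu = n * p                # expected value, mu = 100
eps = 0.2
trials = 5000
hits = sum(
    sum(random.random() < p for _ in range(n)) <= mu * (1 - eps)
    for _ in range(trials)
)
empirical = hits / trials
chernoff = math.exp(-mu * eps**2 / 2)   # ~ e^{-2}, about 0.135
assert empirical <= chernoff
print(empirical, chernoff)
```

As the text notes, the bound only has bite when µ is reasonably large; for µ close to 0 the right-hand side is close to 1 and the guarantee is vacuous.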

Our general framework provides a simple way to get lower bounds for ex post

utilities. We show that if the ex ante assignment is “fair” (e.g., if it respects the

equal treatment of equals or if it is envy-free), then the ex post allocation of our

mechanism remains fair, at least approximately.

To fix ideas, consider an environment where the set of hard blocks H forms a single hierarchy.14 We impose no restriction on the soft structure S. We assume that utilities are von Neumann-Morgenstern utilities and are additive; that is, there exist values (u_{ik})_{k∈O} such that agent i's utility from an allocation x, with i-th row x_i = (x_{i1}, x_{i2}, . . . , x_{i|O|}), is v_i x = ∑_{k=1}^{|O|} x_{ik} u_{ik}, where, without loss of generality, u_{ik} ∈ [0, 1] for all i, k. Also, let W(x) = ∑_{i=1}^{|N|} v_i x be the social welfare associated with allocation x. In the following theorem, we guarantee that the ex post utility of any agent i and the ex post social welfare are approximately lower-bounded by v_i x and W(x), respectively.

Theorem 3.4.2 (Utility and Welfare Bounds). Any feasible fractional assignment x is approximately implementable in such a way that for each i, if X is the outcome of the lottery, then v_i x ≲ v_i X and W(x) ≲ W(X).

Proof. The idea of the proof is to add the following artificial constraints for the social welfare and for the utility of agents to the soft constraint set:

∑_{k=1}^{|O|} X_{ik} u_{ik} ≥ v_i x    ∀i ∈ N,

14This assumption is for expositional clarity. In fact, it is enough if all the all-row blocks are in the deepest level of H.


∑_{i=1}^{|N|} ∑_{k=1}^{|O|} X_{ik} u_{ik} ≥ W(x).

Since hard blocks form a single hierarchy, the blocks associated with these new

constraints are in the deepest level of the empty hierarchy of the hard structure. The

proof follows immediately from Theorem 3.3.1.

Remark 3.4.3. Theorem 3.4.2 provides lower bounds that are interesting when v_i x is relatively large, which is the case when each agent is (in expectation) allocated several objects (since the u_{ik}'s are normalized to be in [0, 1]). Therefore, in settings such as school choice, our bounds are not practically interesting for providing fairness among students. In fact, it is clear that because each student is assigned to a single school, guaranteeing an envy-free ex post allocation is nearly impossible. Nevertheless, our bounds give strong ex post guarantees for schools, or in general, for when agents (objects) are assigned to a large number of objects (agents). Note that we can define the "utility of the schools" similarly to that of students; that is, v_j x = ∑_{k=1}^{|N|} x_{kj} u_{kj} is the utility of object j from assignment x, where (u_{kj})_{k∈N} gives the value of agent k for object j. In addition, since W(x) is the sum of all agents' utilities and thus often has a large value relative to individual agents' utilities, the bound provided in Theorem 3.4.2 for W(X) is a strong probabilistic bound.

3.5 Fixing Random Serial Dictatorship

Random serial dictatorship is one of the most popular mechanisms in practice for the allocation of indivisible objects. RSD works as follows: The planner draws an ordering of agents uniformly at random and then lets the agents select their favorite bundle of objects (among those remaining, without violating the constraints) one by one according to the realized random ordering. In Subsection 3.1.2, we discussed that although this mechanism is strategy-proof and ex ante fair,15 it is ex ante inefficient and ex post (very) unfair. Che and Kojima (2010) [27] show that the ex ante inefficiency disappears in large markets. Ex post unfairness, however, remains

15The RSD mechanism satisfies the 'equal treatment of equals' and the 'SD envy-freeness' criteria.


a serious issue for the RSD mechanism, since agents with the highest priorities can choose the most preferred items. The following example clarifies this problem.

Example 3.5.1. Consider a course allocation setting with two students, s1 and s2, each planning to take two courses. Suppose there are four different courses, c1, c2, c3 and c4, each with a capacity of 1. Let us assume that both students prefer c1 and c2 the most.

Now suppose we run the RSD mechanism, choosing one of the two possible orderings with equal probability. This mechanism is obviously ex ante fair, in the sense that it treats students in a symmetric fashion. Yet, the student with the best priority will take c1 and c2, and the other student has no choice but to take c3 and c4, which is ex post very unfair.
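The ex ante/ex post contrast in Example 3.5.1 can be made concrete with a few lines of code. The cardinal utility numbers below are illustrative assumptions added for the simulation (the example itself only fixes ordinal preferences):

```python
from itertools import permutations

# Example 3.5.1 sketch: two students, four courses of capacity 1, each student
# takes two courses. Both students share the same (assumed) cardinal values.
util = {'c1': 10, 'c2': 9, 'c3': 1, 'c4': 0}
students = ['s1', 's2']

def serial_dictatorship(order):
    remaining = set(util)
    alloc = {}
    for s in order:
        picks = sorted(remaining, key=lambda c: -util[c])[:2]  # best two left
        alloc[s] = picks
        remaining -= set(picks)
    return alloc

# Ex ante: each ordering has probability 1/2, so expected utilities are equal.
ex_ante = {s: 0.0 for s in students}
for order in permutations(students):
    alloc = serial_dictatorship(order)
    for s in students:
        ex_ante[s] += sum(util[c] for c in alloc[s]) / 2

print(ex_ante)  # both 10.0 = (19 + 1)/2 -- ex ante fair
# Ex post: under either realized ordering, one student gets utility 19 and
# the other gets 1 -- a large realized discrepancy.
```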

Consider the same model as in Subsection 3.4.3 in which agents have additive

utilities over all subsets of objects and suppose all constraints’ lower quotas are set to

zero. For any k ∈ {1, 2, . . . , N!}, let π_k be a priority ordering of the agents. We introduce a new mechanism, the Approximate Random Serial Dictatorship (ARSD) mechanism, and prove that this mechanism is strategy-proof, ex ante fair, and ex post approximately fair. The idea is simple: the RSD mechanism induces an ex ante assignment, which is potentially fractional. We ask for agents' preferences, construct the expected assignment induced by the RSD, and then employ our randomized mechanism based on the Operation X to implement it. We formally define ARSD below.

The Approximate Random Serial Dictatorship Mechanism (ARSD)

1. Agents report their ordinal preferences over individual objects.

2. Construct the expected random serial dictatorship assignment x_rsd in the following way: run the serial dictatorship mechanism, prioritizing agents according to π_k, and without violating any of the (hard and soft) constraints. Denote the resulting pure assignment by X_k. Let x_rsd = (1/N!) ∑_{k=1}^{N!} X_k.

3. The mechanism approximately implements xrsd.
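Step 2 above can be sketched directly for a tiny market: enumerate the N! orderings, run serial dictatorship for each, and average the pure assignments. The preferences and unit capacities below are hypothetical, and the only constraints in this sketch are the objects' capacities:

```python
from itertools import permutations
from fractions import Fraction

# Minimal sketch of Step 2 of ARSD (illustrative data, unit demands/capacities).
prefs = {'s1': ['c1', 'c2', 'c3'],
         's2': ['c1', 'c3', 'c2'],
         's3': ['c2', 'c1', 'c3']}
capacity = {'c1': 1, 'c2': 1, 'c3': 1}

def serial_dictatorship(order):
    left = dict(capacity)
    assignment = {}
    for s in order:
        obj = next(c for c in prefs[s] if left[c] > 0)  # best remaining object
        assignment[s] = obj
        left[obj] -= 1
    return assignment

orders = list(permutations(prefs))
x_rsd = {(s, c): Fraction(0) for s in prefs for c in capacity}
for order in orders:
    for s, c in serial_dictatorship(order).items():
        x_rsd[(s, c)] += Fraction(1, len(orders))

# Each row sums to 1: every agent is fully (fractionally) assigned.
assert all(sum(x_rsd[(s, c)] for c in capacity) == 1 for s in prefs)
print({k: str(v) for k, v in x_rsd.items() if v})
```

Step 3 then hands this fractional x_rsd to the randomized implementation mechanism of the chapter; that step is not reproduced here.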


The following theorem shows that ARSD is strategy-proof and that, in contrast to the standard RSD, the realized utilities of the agents are approximately equal to their expected utilities.

Theorem 3.5.2. The ARSD mechanism is strategy-proof and respects equal treatment

of equals16. Moreover, ex post utilities of the agents are approximately lower-bounded

by their ex ante utilities.

Proof. Since x_rsd is the fractional assignment induced by the RSD and that mechanism is strategy-proof, a straightforward argument based on the revelation principle shows that ARSD is also strategy-proof, since it implements x_rsd. Note that, in expectation, we are exactly implementing x_rsd; there are no approximations in this step. The second part of the theorem, that ex post utilities of the agents are approximately lower-bounded by their ex ante utilities, follows immediately from Theorem 3.4.2.

3.6 Conclusion

We study the mechanism design problem of allocating indivisible objects to agents in a setting where cash transfers are precluded and the final allocation needs to satisfy some constraints. One efficient and ex ante fair solution to this problem is the "expected assignment" method, in which the mechanism first finds a feasible fractional assignment, and then implements that fractional assignment by running a lottery over feasible pure assignments. Previous literature has characterized a maximal 'constraint structure' that can be accommodated by the expected assignment method. Such a structure rules out many real-world applications. We show that by reconceptualizing the role of constraints and treating some of them as goals rather than hard constraints, one can accommodate many more constraints.

The main theorem of the paper identifies a rich constraint structure that is approximately implementable, meaning that any expected assignment that satisfies both hard constraints and soft constraints (i.e., goals) can be implemented by a lottery over


nearby pure assignments in such a way that hard constraints are exactly satisfied and goals are satisfied with only very small errors. As a corollary of the main theorem, we show that if the structure of hard constraints is hierarchical, then any set of goals can be approximately satisfied. This allows us to significantly expand the potential applications of the expected assignment method. For instance, in the school choice setting, we can accommodate racial, gender, and walk-zone priority constraints at the same time.

The key technical novelty of this study is the randomized mechanism that we design in order to implement a fractional assignment. We quantify the violations of soft constraints by applying probabilistic concentration bounds. This framework helps us preserve some desirable properties of the expected allocation in the ex post allocation. For instance, an envy-free or efficient expected allocation remains approximately envy-free and efficient ex post. By applying the same technique, we introduce a new way to implement walk-zone requirements in which, rather than setting a quota on students from a specific 'walk-zone', we define a penalty function based on each student's distance to each school. In this way, students who live just inside and just outside of a specific walk-zone are not treated differently.

We exploit the same technique to modify the random serial dictatorship mechanism by making it ex post (approximately) fair. This is done by constructing the ex ante assignment associated with RSD, and implementing it with our randomized mechanism.

We are hopeful that the proposed framework for partitioning constraints and

the randomized mechanism we designed will pave the way for designing improved

allocation mechanisms in practice.


Appendix A

Missing Proofs From Chapter 2

A.1 Auxiliary Inequalities

In this section we prove several inequalities that are used throughout the paper. For

any a, b ≥ 0,

∑_{i=a}^∞ e^{−bi²} = ∑_{i=0}^∞ e^{−b(i+a)²} ≤ ∑_{i=0}^∞ e^{−ba²−2iab} = e^{−ba²} ∑_{i=0}^∞ (e^{−2ab})^i

= e^{−ba²} / (1 − e^{−2ab}) ≤ e^{−ba²} / min{ab, 1/2}.    (A.1.1)

The last inequality can be proved as follows: if 2ab ≤ 1, then 1 − e^{−2ab} ≥ ab; otherwise 1 − e^{−2ab} ≥ 1/2.

For any a, b ≥ 0,

∑_{i=a}^∞ (i − 1) e^{−bi²} ≤ ∫_{a−1}^∞ x e^{−bx²} dx = (−1/2b) e^{−bx²} |_{a−1}^∞ = e^{−b(a−1)²} / 2b.    (A.1.2)

For any a ≥ 0 and 0 ≤ b ≤ 1,

∑_{i=a}^∞ i e^{−bi} = e^{−ba} ∑_{i=0}^∞ (i + a) e^{−bi} = e^{−ba} ( a/(1 − e^{−b}) + e^{−b}/(1 − e^{−b})² ) ≤ e^{−ba} (2ba + 4) / b².    (A.1.3)


APPENDIX A. MISSING PROOFS FROM CHAPTER 2 90

The Bernoulli inequality states that for any x ≤ 1 and any n ≥ 1,

(1 − x)^n ≥ 1 − nx.    (A.1.4)

Here, we prove it for integer n by a simple induction on n. The inequality trivially holds for n = 0. Assuming it holds for n, we can write

(1 − x)^{n+1} = (1 − x)(1 − x)^n ≥ (1 − x)(1 − nx) = 1 − (n + 1)x + nx² ≥ 1 − (n + 1)x.
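Since inequalities (A.1.1)-(A.1.4) are used repeatedly in the later proofs, a quick numeric spot-check is cheap. The test points below are arbitrary, and the infinite sum is truncated at a large index:

```python
import math

# Numeric spot-checks of (A.1.1) and (A.1.4); parameters are arbitrary
# test points, truncating the infinite sum at a large index.
def lhs_A11(a, b, terms=100000):
    return sum(math.exp(-b * i * i) for i in range(a, terms))

for a, b in [(1, 0.1), (3, 0.5), (5, 0.01)]:
    bound = math.exp(-b * a * a) / min(a * b, 0.5)
    assert lhs_A11(a, b) <= bound + 1e-9

# Bernoulli inequality (A.1.4): (1 - x)^n >= 1 - n*x for x <= 1, n >= 1.
for x in [-0.5, 0.0, 0.3, 0.9, 1.0]:
    for n in range(1, 8):
        assert (1 - x) ** n >= 1 - n * x - 1e-12
print("checks passed")
```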

A.2 Proof of Theorem 2.4.2

A.2.1 Stationary Distributions: Existence and Uniqueness

In this part we show that the Markov chain Z_t has a unique stationary distribution under each of the Greedy and Patient algorithms. By Proposition 2.4.1, Z_t is a Markov chain on the non-negative integers (ℕ) that starts from state zero.

First, we show that the Markov chain is irreducible. Note that every state i > 0 is reachable from state 0 with non-zero probability: it is sufficient that i agents arrive at the market with no acceptable bilateral transactions. On the other hand, state 0 is reachable from any i > 0 with non-zero probability: it is sufficient that all of the i agents in the pool become critical and no new agents arrive at the market. So Z_t is an irreducible Markov chain.

Therefore, by the ergodic theorem it has a unique stationary distribution if and only if it has a positive recurrent state [64, Theorem 3.8.1]. In the rest of the proof we show that state 0 is positive recurrent. By (2.3.1), Z_t = 0 whenever Z̄_t = 0. So, it is sufficient to show

E[ inf{t ≥ T_1 : Z̄_t = 0} | Z̄_{t_0} = 0 ] < ∞.    (A.2.1)

It follows that Z̄_t is just a continuous-time birth-death process on ℕ with the following transition rates:

r_{k→k+1} = m  and  r_{k→k−1} = k.    (A.2.2)

It is well known (see, e.g., [38, pp. 249-250]) that Z̄_t has a stationary distribution if and only if

∑_{k=1}^∞ (r_{0→1} r_{1→2} · · · r_{k−1→k}) / (r_{1→0} r_{2→1} · · · r_{k→k−1}) < ∞.

Using (A.2.2), we have

∑_{k=1}^∞ (r_{0→1} r_{1→2} · · · r_{k−1→k}) / (r_{1→0} r_{2→1} · · · r_{k→k−1}) = ∑_{k=1}^∞ m^k / k! = e^m − 1 < ∞.

Therefore, Z̄_t has a stationary distribution. The ergodic theorem [64, Theorem 3.8.1] entails that every state in the support of the stationary distribution is positive recurrent. Thus, state 0 is positive recurrent under Z̄_t. This proves (A.2.1), so Z_t is an ergodic Markov chain.
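The computation above also identifies the stationary law of this birth-death process: detailed balance, π(k)·m = π(k+1)·(k+1), is solved by π(k) proportional to m^k/k!, i.e., a Poisson(m) distribution. A short numeric check, with m an arbitrary test value:

```python
import math

# Birth-death chain (A.2.2): birth rate m, death rate k in state k.
# Detailed balance pi(k)*m = pi(k+1)*(k+1) gives pi(k) ~ m^k / k! (Poisson).
m = 5.0
K = 150                               # truncation level for the check
w = [1.0]                             # unnormalized weights m^k / k!
for k in range(K - 1):
    w.append(w[-1] * m / (k + 1))
total = sum(w)                        # ~ e^m, so the weights are summable
pi = [x / total for x in w]

for k in range(K - 1):                # verify detailed balance numerically
    assert abs(pi[k] * m - pi[k + 1] * (k + 1)) < 1e-12
assert abs(total - math.exp(m)) < 1e-6   # sum_{k>=0} m^k/k! = e^m
print("stationary law is (truncated) Poisson(%g)" % m)
```

The ratio test in the text, ∑_{k≥1} m^k/k! = e^m − 1 < ∞, is exactly the summability of these weights excluding the k = 0 term.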

A.2.2 Upper bounding the Mixing Times

In this part we complete the proof of Theorem 2.4.2 and provide an upper bound on the mixing time of the Markov chain Z_t for the Greedy and Patient algorithms. Let π(·) be the stationary distribution of the Markov chain.

A.2.3 Mixing time of the Greedy Algorithm

We use the coupling technique (see [56, Chapter 5]) to get an upper bound on the mixing time of the Greedy algorithm. Suppose we have two Markov chains Y_t, Z_t (with different starting distributions), each running the Greedy algorithm. We define a joint Markov chain (Y_t, Z_t)_{t≥0} with the property that projecting on either of Y_t and Z_t we see the stochastic process of the Greedy algorithm, and that they stay together at

Page 104: ALGORITHMIC MARKET DESIGN A DISSERTATION SUBMITTED …xf731pn2513/Thesis-Akbarpour... · Approved for the Stanford University Committee on Graduate Studies. Patricia J. Gumport, Vice

APPENDIX A. MISSING PROOFS FROM CHAPTER 2 92

all times after their first simultaneous visit to a single state, i.e.,

if Yt0 = Zt0 , then Yt = Zt for t ≥ t0.

Next we define the joint chain. We define this chain such that for any t ≥ t_0, |Y_t − Z_t| ≤ |Y_{t_0} − Z_{t_0}|. Assume that Y_{t_0} = y and Z_{t_0} = z at some time t_0 ≥ 0, for y, z ∈ ℕ. Without loss of generality assume y < z (note that if y = z there is nothing to define). Consider any arbitrary labeling of the agents in the first pool with a_1, . . . , a_y, and in the second pool with b_1, . . . , b_z. Define z + 1 independent exponential clocks such that the first z clocks have rate 1, and the last one has rate m. If the i-th clock ticks for 1 ≤ i ≤ y, then both a_i and b_i become critical (recall that in the Greedy algorithm a critical agent leaves the market right away). If y < i ≤ z, then b_i becomes critical, and if i = z + 1 new agents a_{y+1} and b_{z+1} arrive at the markets. In the latter case we need to draw edges between the new agents and those currently in the pools. We use z independent coins, each with parameter d/m. We use the first y coins to decide simultaneously on the potential transactions (a_i, a_{y+1}) and (b_i, b_{z+1}) for 1 ≤ i ≤ y, and the last z − y coins for the rest. This implies that for any 1 ≤ i ≤ y, (a_i, a_{y+1}) is an acceptable transaction iff (b_i, b_{z+1}) is acceptable. Observe that if a_{y+1} has at least one acceptable transaction then so does b_{z+1}, but the converse does not necessarily hold.

It follows from the above construction that |Y_t − Z_t| is a non-increasing function of t. Furthermore, this value decreases when any of the agents b_{y+1}, . . . , b_z becomes critical (we note that this value may also decrease when a new agent arrives, but we do not exploit this situation here). Now suppose |Y_0 − Z_0| = k. It follows that the two chains arrive at the same state once all of the k agents that are not in common have become critical. This time has the same distribution as the maximum of k independent exponential random variables with rate 1. Let E_k be a random variable that is the maximum of k independent exponentials of rate 1. For any t ≥ 0,

P[Z_t ≠ Y_t] ≤ P[E_{|Y_0−Z_0|} ≥ t] = 1 − (1 − e^{−t})^{|Y_0−Z_0|}.
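The coupling time is thus dominated by the maximum E_k of k rate-1 exponentials, whose exact tail 1 − (1 − e^{−t})^k the bound uses. A quick Monte Carlo check, with k, t and the trial count chosen as arbitrary test values:

```python
import math, random

# Tail of the maximum of k independent rate-1 exponentials:
# P[E_k >= t] = 1 - (1 - e^{-t})^k.
random.seed(1)
k, t, trials = 10, 3.0, 100000
hits = sum(
    max(random.expovariate(1.0) for _ in range(k)) >= t
    for _ in range(trials)
)
empirical = hits / trials
exact = 1 - (1 - math.exp(-t)) ** k    # ~ 0.400 for k = 10, t = 3
print(empirical, exact)
assert abs(empirical - exact) < 0.01
```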

Now, we are ready to bound the mixing time of the Greedy algorithm. Let zt(.)


be the distribution of the pool size at time t when there is no agent in the pool at time 0, and let π(·) be the stationary distribution. Fix 0 < ε < 1/4, and let β ≥ 0 be a parameter that we fix later. Let (Y_t, Z_t) be the joint Markov chain constructed above, where Y_t is started at the stationary distribution and Z_t is started at state zero. Then,

‖z_t − π‖_TV ≤ P[Y_t ≠ Z_t] = ∑_{i=0}^∞ π(i) P[Y_t ≠ Z_t | Y_0 = i]
≤ ∑_{i=0}^∞ π(i) P[E_i ≥ t]
≤ ∑_{i=0}^{βm/d} (1 − (1 − e^{−t})^{βm/d}) + ∑_{i=βm/d}^∞ π(i)
≤ (β²m²/d²) e^{−t} + 2e^{−m(β−1)²/2d},

where the last inequality follows by equation (A.1.4) and Proposition 2.5.5. Letting β = 1 + √(2 log(2/ε)) and t = 2 log(βm/d) · log(2/ε), we get ‖z_t − π‖_TV ≤ ε, which proves the theorem.
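Plugging the stated choices of β and t back into the bound is an easy numeric sanity check; m, d, and ε below are arbitrary test values with m ≥ d:

```python
import math

# Greedy mixing bound with beta = 1 + sqrt(2 log(2/eps)) and
# t = 2 log(beta*m/d) * log(2/eps): the bound
# (beta^2 m^2 / d^2) e^{-t} + 2 e^{-m (beta-1)^2 / (2d)}
# should come out at most eps. Test values only.
m, d, eps = 100.0, 1.0, 0.01
beta = 1 + math.sqrt(2 * math.log(2 / eps))
t = 2 * math.log(beta * m / d) * math.log(2 / eps)
bound = (beta**2 * m**2 / d**2) * math.exp(-t) \
        + 2 * math.exp(-m * (beta - 1) ** 2 / (2 * d))
print(t, bound)
assert bound <= eps
```

For these values the bound is many orders of magnitude below ε, so the mixing-time estimate t = O(log(m/d) log(1/ε)) is comfortably sufficient.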

A.2.4 Mixing time of the Patient Algorithm

It remains to bound the mixing time of the Patient algorithm. The construction of the joint Markov chain is very similar to the above construction, except for some caveats. Again, suppose Y_{t_0} = y and Z_{t_0} = z for y, z ∈ ℕ and t_0 ≥ 0, and that y < z. Let a_1, . . . , a_y and b_1, . . . , b_z be a labeling of the agents. We consider two cases.

Case 1) z > y + 1. In this case the construction is essentially the same as for the Greedy algorithm. The only difference is that we toss random coins to decide on acceptable bilateral transactions at the time that an agent becomes critical (and not at the time of arrival). It follows that when new agents arrive, the size of each of the pools increases by 1 (so the difference remains unchanged). If any of the agents b_{y+1}, . . . , b_z becomes critical, then the size of the second pool decreases by 1 or 2, and so does the difference of the pool sizes.


Case 2) z = y + 1. In this case we define a slightly different coupling. This is because, for some parameters and starting values, the Markov chains may not visit the same state for a long time under the coupling defined in Case 1. If z ≫ m/d, then with high probability any critical agent gets matched. Therefore, the magnitude of |Z_t − Y_t| does not quickly decrease (for a concrete example, consider the case where d = m, y = m/2 and z = m/2 + 1). Therefore, in this case we change the coupling. We use z + 2 independent clocks, where the first z are the same as before, i.e., they have rate 1, and when the i-th clock ticks b_i (and a_i if i ≤ y) becomes critical. The last two clocks have rate m; when the (z+1)-st clock ticks a new agent arrives at the first pool, and when the (z+2)-nd one ticks a new agent arrives at the second pool.

Let |Y_0 − Z_0| = k. By the above construction, |Y_t − Z_t| is a non-increasing function of t unless |Y_t − Z_t| = 1. In the latter case the difference goes to zero if a new agent arrives at the smaller pool, and it increases if a new agent arrives at the bigger pool. Let τ be the first time t at which |Y_t − Z_t| = 1. Similar to the Greedy algorithm, the event {τ > t} requires that the second-largest of k independent exponential random variables with rate 1 exceeds t. Therefore,

P[τ > t] ≤ P[E_k ≥ t] = 1 − (1 − e^{−t})^k.

Now, suppose t ≥ τ; we need to bound the time it takes to make the difference zero. First, note that after time τ the difference is never more than 2. Let X_t be the (continuous-time) Markov chain illustrated in Figure A.1, and suppose X_0 = 1. Using m ≥ 1, it is easy to see that if X_t = 0 for some t ≥ 0, then |Y_{t+τ} − Z_{t+τ}| = 0 (but the converse is not necessarily true). It is a simple exercise to show that for t ≥ 8,

P[X_t ≠ 0] = ∑_{k=0}^∞ (e^{−t} t^k / k!) 2^{−k/2} ≤ ∑_{k=0}^{t/4} e^{−t} t^k / k! + 2^{−t/8} ≤ 2^{−t/4} + 2^{−t/8}.    (A.2.3)

Now, we are ready to upper-bound the mixing time of the Patient algorithm. Let

zt(.) be the distribution of the pool size at time t where there is no agent at time 0,

and let π(.) be the stationary distribution. Fix ε > 0, and let β ≥ 2 be a parameter


Figure A.1: A three-state Markov chain used for analyzing the mixing time of the Patient algorithm.

that we fix later. Let (Yt, Zt) be the joint chain that we constructed above where Yt

is started at the stationary distribution and Zt is started at state zero.

‖z_t − π‖_TV ≤ P[Z_t ≠ Y_t] ≤ P[τ > t/2] + P[X_{t/2} ≠ 0]
≤ ∑_{i=0}^∞ π(i) P[τ > t/2 | Y_0 = i] + 2^{−t/8+1}
≤ 2^{−t/8+1} + ∑_{i=0}^∞ π(i)(1 − (1 − e^{−t/2})^i)
≤ 2^{−t/8+1} + ∑_{i=0}^{βm} π(i) · i e^{−t/2} + ∑_{i=βm}^∞ π(i)
≤ 2^{−t/8+1} + β²m² e^{−t/2} + 6e^{−(β−1)m/3},

where in the second-to-last inequality we used equation (A.1.4), and in the last inequality we used Proposition 2.5.9. Letting β = 10 and t = 8 log(m) log(4/ε) implies that ‖z_t − π‖_TV ≤ ε, which proves Theorem 2.4.2.


A.3 Proofs from Section 2.5

A.3.1 Proof of Lemma 2.5.4

Proof. By Proposition 2.3.3, E[Z_t] ≤ m for all t, so

L(Greedy) = (1/(m·T)) E[ ∫_{t=0}^T Z_t dt ] = (1/(m·T)) ∫_{t=0}^T E[Z_t] dt
≤ (1/(m·T)) · m τ_mix(ε) + (1/(m·T)) ∫_{t=τ_mix(ε)}^T E[Z_t] dt,    (A.3.1)

where the second equality uses the linearity of expectation. Let Z̄_t be the number of agents in the pool at time t when we do not match any pair of agents. By (2.3.1),

P[Z_t ≥ i] ≤ P[Z̄_t ≥ i].

Therefore, for t ≥ τ_mix(ε),

E[Z_t] = ∑_{i=1}^∞ P[Z_t ≥ i] ≤ ∑_{i=0}^{6m} P[Z_t ≥ i] + ∑_{i=6m+1}^∞ P[Z̄_t ≥ i]
≤ ∑_{i=0}^{6m} (P_{Z∼π}[Z ≥ i] + ε) + ∑_{i=6m+1}^∞ ∑_{ℓ=i}^∞ m^ℓ/ℓ!
≤ E_{Z∼π}[Z] + 6mε + ∑_{i=6m+1}^∞ 2m^i/i!
≤ E_{Z∼π}[Z] + 6mε + 4m^{6m}/(6m)!    (A.3.2)
≤ E_{Z∼π}[Z] + 6mε + 2^{−6m},    (A.3.3)

where the second inequality uses P[Z̄_t = ℓ] ≤ m^ℓ/ℓ! from Proposition 2.3.3, and the last inequality follows by Stirling's approximation1 of (6m)!. Putting (A.3.1) and (A.3.3) together proves the lemma.

1Stirling's approximation states that n! ≥ √(2πn) (n/e)^n.

A.3.2 Proof of Lemma 2.5.7

Proof. Let ∆ ≥ 0 be a parameter that we fix later. We have

E_{Z∼π}[Z] ≤ k* + ∆ + ∑_{i=k*+∆+1}^∞ i π(i).    (A.3.4)

By equation (2.5.6),

∑_{i=k*+∆+1}^∞ i π(i) = ∑_{i=∆+1}^∞ e^{−d(i−1)²/2m} (i + k*)
= ∑_{i=∆}^∞ e^{−di²/2m} (i − 1) + ∑_{i=∆}^∞ e^{−di²/2m} (k* + 2)
≤ e^{−d(∆−1)²/2m} / (d/m) + (k* + 2) · e^{−d∆²/2m} / min{1/2, d∆/2m},    (A.3.5)

where in the last step we used equations (A.1.1) and (A.1.2). Letting ∆ := 1 + 2√((m/d) log(m/d)) in the above equation, the right-hand side is at most 1. The lemma follows from (A.3.4) and the above equation.

A.3.3 Proof of Lemma 2.5.8

Proof. By linearity of expectation,

L(Patient) = (1/(m·T)) E[ ∫_{t=0}^T Z_t (1 − d/m)^{Z_t−1} dt ] = (1/(m·T)) ∫_{t=0}^T E[ Z_t (1 − d/m)^{Z_t−1} ] dt.

Since for any t ≥ 0, E[Z_t(1 − d/m)^{Z_t−1}] ≤ E[Z_t] ≤ E[Z̄_t] ≤ m, we can write

L(Patient) ≤ τ_mix(ε)/T + (1/(m·T)) ∫_{t=τ_mix(ε)}^T ∑_{i=0}^∞ (π(i) + ε) · i (1 − d/m)^{i−1} dt
≤ τ_mix(ε)/T + E_{Z∼π}[Z(1 − d/m)^{Z−1}]/m + εm/d²,


where the last inequality uses the identity ∑_{i=0}^∞ i(1 − d/m)^{i−1} = m²/d².

A.3.4 Proof of Proposition 2.5.9

Let us first rewrite what we derived in the proof overview of this proposition in the main text. The balance equations of the Markov chain associated with the Patient algorithm can be written as follows, by replacing the transition probabilities from (2.5.7), (2.5.8), and (2.5.9) in (2.5.10):

mπ(k) = (k + 1)π(k + 1) + (k + 2)(1 − (1 − d/m)^{k+1}) π(k + 2).    (A.3.6)

Now define a continuous f : ℝ → ℝ as follows:

f(x) := m − (x + 1) − (x + 2)(1 − (1 − d/m)^{x+1}).    (A.3.7)

It follows that

f(m − 1) ≤ 0,  f(m/2 − 2) > 0,

which means that f(·) has a root k* such that m/2 − 2 < k* < m. In the rest of the proof we show that the states that are far from k* have very small probability in the stationary distribution.

In order to complete the proof of Proposition 2.5.9, we first prove the following useful lemma.

Lemma A.3.1. For any integer k ≤ k*,

π(k) / max{π(k + 1), π(k + 2)} ≤ e^{−(k*−k)/m}.

Similarly, for any integer k ≥ k*,

min{π(k + 1), π(k + 2)} / π(k) ≤ e^{−(k−k*)/(m+k−k*)}.


Proof. For k ≤ k*, by equation (A.3.6),

π(k) / max{π(k + 1), π(k + 2)} ≤ ( (k + 1) + (k + 2)(1 − (1 − d/m)^{k+1}) ) / m
≤ ( (k − k*) + (k* + 1) + (k* + 2)(1 − (1 − d/m)^{k*+1}) ) / m
= 1 − (k* − k)/m ≤ e^{−(k*−k)/m},

where the equality follows by the definition of k* and the last inequality uses 1 − x ≤ e^{−x}. The second conclusion can be proved similarly. For k ≥ k*,

min{π(k + 1), π(k + 2)} / π(k) ≤ m / ( (k + 1) + (k + 2)(1 − (1 − d/m)^{k+1}) )
≤ m / ( (k − k*) + (k* + 1) + (k* + 2)(1 − (1 − d/m)^{k*+1}) )
= m / (m + k − k*) = 1 − (k − k*)/(m + k − k*) ≤ e^{−(k−k*)/(m+k−k*)},

where the equality follows by the definition of k*.

Now, we use the above claim to upper-bound π(k) for values of k that are far from k*. First, fix k ≤ k*. Let n_0, n_1, . . . be a sequence of integers defined as follows: n_0 = k, and n_{i+1} := arg max{π(n_i + 1), π(n_i + 2)} for i ≥ 0. It follows that

π(k) ≤ ∏_{i : n_i ≤ k*} π(n_i)/π(n_{i+1}) ≤ exp( − ∑_{i : n_i ≤ k*} (k* − n_i)/m )    (A.3.8)
≤ exp( − ∑_{i=0}^{(k*−k)/2} 2i/m ) ≤ e^{−(k*−k)²/4m},    (A.3.9)

where the second-to-last inequality uses |n_i − n_{i−1}| ≤ 2.

Now, fix k ≥ k* + 2. In this case we construct the following sequence of integers: n_0 = ⌊k* + 2⌋, and n_{i+1} := arg min{π(n_i + 1), π(n_i + 2)} for i ≥ 0. Let n_j be the largest number in the sequence that is at most k (observe that n_j = k − 1 or n_j = k).


We upper-bound π(k) by upper-bounding π(n_j):

π(k) ≤ m·π(n_j)/k ≤ 2 ∏_{i=0}^{j−1} π(n_{i+1})/π(n_i) ≤ 2 exp( − ∑_{i=0}^{j−1} (n_i − k*)/(m + n_i − k*) )
≤ 2 exp( − ∑_{i=0}^{(j−1)/2} 2i/(m + k − k*) ) ≤ 2 exp( − (k − k* − 1)²/(4(m + k − k*)) ).    (A.3.10)

To see the first inequality, note that if n_j = k, then there is nothing to show; otherwise we have n_j = k − 1, and in this case, by equation (A.3.6), mπ(k − 1) ≥ kπ(k). The second-to-last inequality uses the fact that |n_i − n_{i+1}| ≤ 2.

We are almost done. The proposition now follows from (A.3.9) and (A.3.10). First, for σ ≥ 1, let ∆ = σ√(4m); then by equation (A.1.1),

∑_{i=0}^{k*−∆} π(i) ≤ ∑_{i=∆}^∞ e^{−i²/4m} ≤ e^{−∆²/4m} / min{1/2, ∆/4m} ≤ 2√m e^{−σ²}.

Similarly,

∑_{i=k*+∆}^∞ π(i) ≤ 2 ∑_{i=∆+1}^∞ e^{−(i−1)²/4(i+m)} ≤ 2 ∑_{i=∆}^∞ e^{−i/(4+√(4m)/σ)}
≤ 2 e^{−∆/(4+√(4m)/σ)} / (1 − e^{−1/(4+√(4m))}) ≤ 8√m e^{−σ²√m/(2σ+√m)}.

This completes the proof of Proposition 2.5.9.

A.3.5 Proof of Lemma 2.5.10

Proof. Let ∆ := 3√(m log(m)), and let β := max_{z∈[m/2−∆, m+∆]} z(1 − d/m)^z. Then

E_{Z∼π}[Z(1 − d/m)^Z] ≤ β + ∑_{i=0}^{m/2−∆−1} (m/2) π(i) (1 − d/m)^i    (A.3.11)
  + ∑_{i=m+∆}^∞ i π(i) (1 − d/m)^m.    (A.3.12)


We upper-bound each of the terms on the right-hand side separately. We start with upper-bounding β. Let ∆′ := 4(log(2m) + 1)∆. Then

β ≤ max_{z∈[m/2,m]} z(1 − d/m)^z + (m/2)(1 − d/m)^{m/2}((1 − d/m)^{−∆} − 1) + (1 − d/m)^m ∆
≤ max_{z∈[m/2,m]} (z + ∆′ + ∆)(1 − d/m)^z + 1.    (A.3.13)

To see the last inequality, we consider two cases. If (1 − d/m)^{−∆} ≤ 1 + ∆′/m, then the inequality obviously holds. Otherwise (assuming ∆′ ≤ m),

(1 − d/m)^∆ ≤ 1/(1 + ∆′/m) ≤ 1 − ∆′/2m,

and by the definition of β,

β ≤ (m + ∆)(1 − d/m)^{m/2−∆} ≤ 2m(1 − ∆′/2m)^{m/(2∆)−1} ≤ 2m e^{−∆′/4∆+1} ≤ 1.

It remains to upper-bound the second and the third terms in (A.3.12). We start with the second term. By Proposition 2.5.9,

∑_{i=0}^{m/2−∆−1} π(i) ≤ 1/m^{3/2},    (A.3.14)

where we used equation (A.1.1). On the other hand, by equation (A.3.10),

∑_{i=m+∆}^∞ i π(i) ≤ e^{−∆/(2+√m)} ( m/(1 − e^{−1/(2+√m)}) + (2∆ + 4)/(1/(2 + √m))² ) ≤ 1/√m,    (A.3.15)

where we used equation (A.1.3).

The lemma follows from (A.3.12), (A.3.13), (A.3.14) and (A.3.15).


A.4 Proofs from Section 2.6

A.4.1 Proof of Lemma 2.6.3

Let ∆ := 3√(m log(m)). Let E be the event that Z_t ∈ [k* − ∆, k* + ∆], and let Ē denote its complement. First, we show the following inequality, and then we upper-bound E[X_t | Ē] P[Ē]:

E[X_t](1 − q_{k*+∆}) − E[X_t | Ē] P[Ē] ≤ E[X_t (1 − q_{Z_t})]    (A.4.1)
  ≤ E[X_t](1 − q_{k*−∆}) + E[X_t | Ē] P[Ē].    (A.4.2)

We prove the right inequality; the left one can be proved similarly.

By definition of expectation,

E[X_t(1 − q_{Z_t})] = E[X_t(1 − q_{Z_t}) | E] · P[E] + E[X_t(1 − q_{Z_t}) | Ē] · P[Ē]
≤ E[X_t | E](1 − q_{k*−∆}) + E[X_t(1 − q_{Z_t}) | Ē] · P[Ē].

Now, for any random variable X and any event E we have E[X | E] · P[E] = E[X] − E[X | Ē] · P[Ē]. Therefore,

E[X_t(1 − q_{Z_t})] ≤ (1 − q_{k*−∆})( E[X_t] − E[X_t | Ē] · P[Ē] ) + E[X_t(1 − q_{Z_t}) | Ē] · P[Ē]
≤ E[X_t](1 − q_{k*−∆}) + E[X_t | Ē] P[Ē],

where we simply used the non-negativity of X_t and the fact that 1 − q_{k*−∆} ≤ 1. This proves the right inequality, (A.4.2). The left inequality can be proved similarly.

It remains to upper-bound E[Xt | Ē]·P[Ē]. Let π(·) be the stationary distribution of the Markov chain Zt. Since, by definition of Xt, we have Xt ≤ Zt with probability 1,

E[Xt | Ē]·P[Ē] ≤ E[Zt | Ē]·P[Ē] ≤ ∑_{i=0}^{k∗−∆} i(π(i) + ε) + ∑_{i=k∗+∆}^{6m} i(π(i) + ε) + ∑_{i=6m+1}^{∞} i·P[Z̄t = i],

where the last term uses the fact that Zt is at most Z̄t, the size of the pool of the inactive policy at time t, i.e., P[Zt = i] ≤ P[Z̄t = i] for all i > 0. We bound the first term of the RHS using Proposition 2.5.9, the second term using (A.3.15), and the last term using Proposition 2.3.3:

E[Xt | Ē]·P[Ē] ≤ 4/√m + 6mε + ∑_{i=6m}^{∞} m^i/i! ≤ 4/√m + 3/m + 2^{−6m}.

A.5 Proofs from Section 2.7

A.5.1 Proof of Lemma 2.7.4

In this section, we present the full proof of Lemma 2.7.4. We prove the lemma by

writing a closed form expression for the utility of a and then upper-bounding that

expression.

In the following claim we study the probability a is matched in the interval [t, t+ε]

and the probability that it leaves the market in that interval.

Claim A.5.1. For any time t ≥ 0 and ε > 0,

P[a ∈ M_{t,t+ε}] = ε·P[a ∈ At]·(2 + c(t))·E[1 − (1−d/m)^{Zt−1} | a ∈ At] ± O(ε²),  (A.5.1)

P[a ∈ A_{t+ε}] = P[a ∈ At]·(1 − ε(1 + c(t) + E[1 − (1−d/m)^{Zt−1} | a ∈ At]) ± O(ε²)).  (A.5.2)

Proof. The claim follows from two simple observations. First, a becomes critical in the interval [t, t + ε] with probability ε·P[a ∈ At]·(1 + c(t)), and if he is critical he is matched with probability E[1 − (1−d/m)^{Zt−1} | a ∈ At]. Second, a may also get matched (without becoming critical) in the interval [t, t + ε]: if an agent b ∈ At with b ≠ a becomes critical, she is matched to a with probability (1 − (1−d/m)^{Zt−1})/(Zt − 1). Therefore, the probability that a is matched in [t, t + ε] without becoming critical is

P[a ∈ At]·E[ ε·(Zt − 1)·(1 − (1−d/m)^{Zt−1})/(Zt − 1) | a ∈ At ] = ε·P[a ∈ At]·E[1 − (1−d/m)^{Zt−1} | a ∈ At].

The claim follows from simple algebraic manipulations.

We need to study the conditional expectation E[1 − (1−d/m)^{Zt−1} | a ∈ At] in order to use the above claim. This is not easy in general: although the distribution of Zt remains stationary, the distribution of Zt conditioned on a ∈ At can be a very different distribution. So here we prove simple upper and lower bounds on E[1 − (1−d/m)^{Zt−1} | a ∈ At] using the concentration properties of Zt. By the assumption of the lemma, Zt is at stationarity at any time t ≥ 0. Let k∗ be the number defined in Proposition 2.5.9, let β := (1 − d/m)^{k∗}, and let σ := √(6 log(8m/β)). By Proposition 2.5.9, for any t ≥ 0,

E[1 − (1−d/m)^{Zt−1} | a ∈ At]

 ≤ E[1 − (1−d/m)^{Zt−1} | Zt < k∗ + σ√(4m), a ∈ At] + P[Zt ≥ k∗ + σ√(4m) | a ∈ At]

 ≤ 1 − (1−d/m)^{k∗+σ√(4m)} + P[Zt ≥ k∗ + σ√(4m)] / P[a ∈ At]

 ≤ 1 − β + β(1 − (1−d/m)^{σ√(4m)}) + 8√m·e^{−σ²/3} / P[a ∈ At]

 ≤ 1 − β + 2σdβ/√m + β²/(m²·P[a ∈ At]).  (A.5.3)


In the last inequality we used (A.1.4) and the definition of σ. Similarly,

E[1 − (1−d/m)^{Zt−1} | a ∈ At]

 ≥ E[1 − (1−d/m)^{Zt−1} | Zt ≥ k∗ − σ√(4m), a ∈ At]·P[Zt ≥ k∗ − σ√(4m) | a ∈ At]

 ≥ (1 − (1−d/m)^{k∗−σ√(4m)})·(P[a ∈ At] − P[Zt < k∗ − σ√(4m)]) / P[a ∈ At]

 ≥ 1 − β − β((1−d/m)^{−σ√(4m)} − 1) − 2√m·e^{−σ²} / P[a ∈ At]

 ≥ 1 − β − 4dσβ/√m − β³/(m³·P[a ∈ At]),  (A.5.4)

where in the last inequality we used (A.1.4), the assumption that 2dσ ≤ √m, and the definition of σ.

Next, we write a closed form upper-bound for P[a ∈ At]. Choose t∗ such that ∫₀^{t∗} (2 + c(t))dt = 2 log(m/β). Observe that t∗ ≤ log(m/β) ≤ σ²/6. Since a leaves the market with rate at least 1 + c(t) and at most 2 + c(t), we can write

β²/m² = exp(−∫₀^{t∗} (2 + c(t))dt) ≤ P[a ∈ At∗] ≤ exp(−∫₀^{t∗} (1 + c(t))dt) ≤ β/m.  (A.5.5)

Intuitively, t∗ is a moment after which the expected utility that a receives in the interval [t∗, ∞) is negligible: even in the best case it is at most β/m.

By Claim A.5.1 and (A.5.4), for any t ≤ t∗,

(P[a ∈ A_{t+ε}] − P[a ∈ At]) / ε ≤ −P[a ∈ At]·(2 + c(t) − β − 4dσβ/√m − β³/(m³·P[a ∈ At]) ± O(ε))

 ≤ −P[a ∈ At]·(2 + c(t) − β − 5dσβ/√m ± O(ε)),

where in the last inequality we used (A.5.5). Letting ε → 0, for t ≤ t∗ the above differential inequality yields

P[a ∈ At] ≤ exp(−∫₀^{t} (2 + c(τ) − β)dτ) + 2dσ³β/√m,  (A.5.6)


where in the last inequality we used t∗ ≤ σ²/6, the bound e^x ≤ 1 + 2x for x ≤ 1, and the lemma's assumption 5dσ² ≤ √m.

Now we are ready to upper-bound the utility of a. By (A.5.5), the expected utility that a gains after time t∗ is no more than β/m. Therefore,

E[u_c(a)] ≤ β/m + ∫₀^{t∗} (2 + c(t))·E[1 − (1−d/m)^{Zt−1} | a ∈ At]·P[a ∈ At]·e^{−δt} dt

 ≤ β/m + ∫₀^{t∗} (2 + c(t))·((1 − β)·P[a ∈ At] + 3dσβ/√m)·e^{−δt} dt

 ≤ β/m + ∫₀^{t∗} (2 + c(t))·((1 − β)·exp(−∫₀^{t} (2 + c(τ) − β)dτ) + 3dσ³β/√m)·e^{−δt} dt

 ≤ 2dσ⁵β/√m + ∫₀^{∞} (1 − β)(2 + c(t))·exp(−∫₀^{t} (2 + c(τ) − β)dτ)·e^{−δt} dt.

In the first inequality we used (A.5.3), in the second inequality we used (A.5.6), and in the last inequality we used the definition of t∗. We have finally obtained a closed form upper-bound on the expected utility of a.

Let U_c(a) denote the right hand side of the above inequality. Next, we show that U_c(a) is maximized by letting c(t) = 0 for all t; this will complete the proof of Lemma 2.7.4. Suppose, towards a contradiction, that c is a maximizer of U_c(a) which is not identically zero, say c(t) ≠ 0 for some t ≥ 0. We define a function c̄ : R+ → R+ and show that if δ < β, then U_c̄(a) > U_c(a). Let c̄ be the following function:

c̄(τ) = c(τ) if τ < t;  0 if t ≤ τ ≤ t + ε;  c(τ) + c(τ − ε) if t + ε < τ ≤ t + 2ε;  c(τ) otherwise.

In words, we push the mass of c(·) in the interval [t, t + ε] to the right. We remark that the function c̄(·) is not necessarily continuous, so we need to smooth it out; this can be done without introducing any errors, and we do not describe


the details here. Let S := ∫₀^{t} (2 + c(τ) − β)dτ. Assuming c′(t) ≪ 1/ε, we have

U_c̄(a) − U_c(a) ≥ −ε·c(t)(1 − β)e^{−S}e^{−δt} + ε·c(t)(1 − β)e^{−S−ε(2−β)}e^{−δ(t+ε)}

 + ε(1 − β)(2 + c(t + ε))·(e^{−S−ε(2−β)}e^{−δ(t+ε)} − e^{−S−ε(2+c(t)−β)}e^{−δ(t+ε)})

 = −ε²·c(t)(1 − β)e^{−S−δt}(2 − β + δ) + ε²(1 − β)(2 + c(t + ε))e^{−S−δt}c(t) + O(ε³)

 ≥ ε²·(1 − β)e^{−S−δt}c(t)(β − δ) + O(ε³).

Since δ < β by the lemma's assumption, this contradicts the choice of c; hence the maximizer of U_c(a) is the all-zero function. Therefore, for any well-behaved function c(·),

E[u_c(a)] ≤ 2dσ⁵β/√m + ∫₀^{∞} 2(1 − β)·exp(−∫₀^{t} (2 − β)dτ)·e^{−δt} dt

 ≤ O(d⁴ log³(m)/√m)·β + 2(1 − β)/(2 − β + δ).

In the last inequality we used that σ = O(√(log(m/β))) and β ≤ e^{−d}. This completes the proof of Lemma 2.7.4.

A.6 Small Market Simulations

In Proposition 2.5.5 and Proposition 2.5.9, we prove that the Markov chains of the Greedy and Patient algorithms are highly concentrated in intervals of size O(√m/d) and O(√m), respectively. These concentration bounds are most meaningful when m is relatively large; in fact, most of our theoretical results are interesting when markets are relatively large. It is therefore natural to ask: what if m is relatively small? And what if d is not small relative to m?

Figure A.2 depicts the simulation results of our model for small m and small T. We simulated the market for m = 20 and T = 100 periods, repeated this process for 500 iterations, and computed the average loss of the Greedy, Patient, and Omniscient algorithms. As the simulation results show, the loss of the Patient algorithm is lower than that of the Greedy algorithm for every d; in particular, as d increases, the Patient algorithm's performance gets closer and closer to that of the Omniscient algorithm, whereas

the Greedy algorithm's loss remains far above both of them.

Figure A.2: Simulated losses for m = 20; average loss as a function of d ∈ [0, 10] for L(Greedy), L(Patient), and L(OMN). For very small market sizes, and even for relatively large values of d, the Patient algorithm outperforms the Greedy algorithm.
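The qualitative comparison above can be reproduced with a compact Monte Carlo sketch. Everything below is our own illustrative scaffolding (function name, data layout, and the simplification that pair compatibilities are drawn lazily rather than memoized); it is a sketch of the market model described in Chapter 2, not the exact simulation code used for Figure A.2.

```python
import random

def simulate_loss(m, d, T, patient, seed=0):
    """Rough discrete-event sketch: agents arrive at rate m, become critical
    at rate 1, and each pair is compatible with probability d/m (drawn lazily,
    a simplification). Greedy matches on arrival; Patient matches only when
    an agent becomes critical. Returns the fraction of agents who perish."""
    rng = random.Random(seed)
    t, next_id = 0.0, 0
    pool = {}                       # agent id -> criticality time
    arrived = perished = 0
    while t < T:
        t_arr = t + rng.expovariate(m)
        t_crit, crit = min(((tc, a) for a, tc in pool.items()),
                           default=(float("inf"), None))
        if t_arr <= t_crit:         # next event: an arrival
            t = t_arr
            a, next_id, arrived = next_id, next_id + 1, arrived + 1
            compatible = [b for b in pool if rng.random() < d / m]
            if not patient and compatible:
                del pool[rng.choice(compatible)]   # Greedy: match on arrival
            else:
                pool[a] = t + rng.expovariate(1.0)
        else:                       # next event: an agent becomes critical
            t = t_crit
            del pool[crit]
            compatible = [b for b in pool if rng.random() < d / m]
            if patient and compatible:
                del pool[rng.choice(compatible)]   # Patient: match at criticality
            else:
                perished += 1       # under Greedy, critical agents just perish
    return perished / max(arrived, 1)
```

Averaging `simulate_loss(20, d, 100, patient)` over a few hundred seeds for each d should qualitatively reproduce Figure A.2, with the Patient curve below the Greedy curve.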


Appendix B

Missing Proofs From Chapter 3

B.1 Implementation: A Random Mechanism

In this section, we present the complete proof of Theorem 3.3.1. As discussed in the

proof overview of the theorem, the proof is constructive. We will propose an imple-

mentation mechanism (or, equivalently, a lottery) which approximately implements

a partitioned structure that satisfies the properties described in Theorem 3.3.1.

To describe the main idea of our mechanism, we need to introduce the notion

of tight and floating constraints: A constraint is tight if it is binding. This notion

is precisely defined in the following definition. First, for any block B, let x(B) =∑e∈B xe.

Definition B.1.1. A constraint S = (B, q_B, q̄_B) is tight if either x(B) = q_B or x(B) = q̄_B; otherwise, S is floating. Similarly, we say that a block B is tight when its corresponding constraint (S here) is tight.

Note that this definition naturally applies to the (implicit) constraints that 0 ≤ x_e ≤ 1 for all e ∈ E.

At the core of our randomized mechanism is a stochastic operation that we call Operation X. We iteratively apply Operation X to the initial fractional assignment. In each iteration t, which is symbolically depicted in Figure 3.3.1, the fractional assignment xt is converted to xt+1 in a way such that: (1) the number of floating


constraints decreases, (2) E[xt+1 | xt] = xt, and (3) xt+1 is feasible with respect to H.

The first property guarantees that after a finite (and small) number of iterations¹, the

obtained assignment is pure. The second property makes sure that the resulting pure

assignment is equal to the original fractional assignment in expectation. The third

property guarantees that all hard constraints are satisfied throughout the whole pro-

cess of the mechanism. As the last step, we need to show that by iteratively applying Operation X, soft constraints are approximately satisfied. This is a more technical

property of Operation X , which we discuss in Subsection B.1.4. Roughly speaking,

we design Operation X in a way such that it never increases (or decreases) two (or

more) elements of a soft constraint at the same iteration. Consequently, elements of

each soft block become “negatively correlated”. It then allows us to employ Chernoff

concentration bounds to prove that soft constraints are approximately satisfied.

In the rest of this section, we design Operation X and prove that it satisfies the

above-mentioned properties.

B.1.1 Definitions

In this section, we introduce the required notions for defining Operation X .

1. For any two links e, e′, a block B separates e and e′ if B contains exactly one of them.

2. Given a hierarchy H, a (hard) block B ∈ H is supporting a pair of links (e, e′)

if it is the smallest block (in the number of involved edges) that contains both

e, e′, and moreover, no block in H separates e, e′.

3. We say that a hierarchy H is supporting the pair (e, e′) if there exists a block

in H which supports (e, e′). In particular, if the subset {e, e′} is in the deepest

level of H, then (e, e′) is supported by H.

4. A floating cycle is a sequence e1, . . . , el of distinct edges such that:

• x_{ei} is non-integral for all i.

• (ei, ei+1) is supported by H1 for even integers i.

• (ei, ei+1) is supported by H2 for odd integers i.

Here the length of the cycle, l, is an even number, and indices are cyclic: i + 1 is read as 1 for i = l. Figure B.1 represents a floating cycle of length 6. A floating cycle is said to be minimal if it does not contain a smaller floating cycle as a subset. We often drop the word minimal: whenever we say a floating cycle, we refer to a minimal floating cycle, unless otherwise specified.

Figure B.1: A floating cycle of length 6

¹Our randomized mechanism stops after at most |H| + |E| iterations.

Next, we define the notion of floating paths; loosely speaking, their structure is

very similar to floating cycles, except in their endpoints. Floating paths start from a

hierarchy and end in the same hierarchy if their length is even, otherwise, they end

in the other hierarchy.

5. A floating path is a sequence e1, e2, . . . , el of distinct edges such that:

• x_{ei} is non-integral for all i.

• There exists a ∈ {1, 2} such that, writing ā for the other index ({ā} = {1, 2} \ {a}):

– (ei, ei+1) is supported by H_a for even integers i < l.

– (ei, ei+1) is supported by H_ā for odd integers i < l.

• No tight block in H_a contains e1, and no tight block in H_b contains el, where b = a if l is even and b = ā if l is odd.

Figure B.2 contains a visual example of a floating path. A floating path is

said to be minimal if it does not contain a smaller floating path as a subset.

Whenever we say a floating path, we refer to a minimal floating path, unless

otherwise specified.

Figure B.2: Example of a floating path. Suppose that in the above fractional assignment H1 is the set of row blocks and H2 is the set of column blocks, and suppose the lower and upper quotas are set to 0 and 1, respectively. Then e1, e2, e3 is a (minimal) floating path, but e1, e4, e3 is not a floating path.

Finally, we introduce the following crucial concept.

Definition B.1.2. Assume we are given a fractional assignment x. For any block B and any ε > 0, let x↑εB denote the new (fractional) assignment in which the matrix entry corresponding to each edge e ∈ B is increased by ε (i.e., it changes to x_e + ε), and all other entries remain unchanged. Similarly, let x↓εB denote the fractional assignment in which the entry corresponding to each edge e ∈ B is decreased by ε (i.e., it changes to x_e − ε), and all other entries remain unchanged.

Example B.1.3. (x↑εB)↓εB′ denotes the fractional assignment in which the value

of any edge e ∈ B − B′ becomes xe + ε, the value of any edge e ∈ B′ − B becomes

xe − ε, and the value of the rest of the edges does not change.


B.1.2 Operation X

Operation X can be applied to a given floating cycle or floating path of a fractional assignment x (if neither exists, then the assignment must be pure by Lemma B.1.10). We first define the operation for a given floating cycle. Let F = 〈e1, . . . , el〉 be a floating cycle in x. Define

Fo = {ei : i is odd},  Fe = {ei : i is even}.

We call the pair (Fo, Fe) the odd-even decomposition of F. Given two non-negative reals ε, ε′ (which we describe how to set shortly), Operation X generates an assignment x′ ∈ R^{N×O} in one of the following two ways:

• x′ = (x↑εFo)↓εFe with probability ε′/(ε + ε′);

• x′ = (x↓ε′Fo)↑ε′Fe with probability ε/(ε + ε′).

Both ε and ε′ are chosen to be the largest possible numbers such that both of the assignments (x↑εFo)↓εFe and (x↓ε′Fo)↑ε′Fe remain feasible, in the sense that they satisfy all hard constraints.

The definition of Operation X on a floating path is the same as its definition on a floating cycle. To summarize, we give a formal definition of Operation X below.

Definition B.1.4. Consider a fractional assignment x and a floating path or floating cycle F given as inputs to Operation X. Then Operation X generates a new assignment x′, where x′ = (x↑εFo)↓εFe with probability ε′/(ε + ε′) and x′ = (x↓ε′Fo)↑ε′Fe with probability ε/(ε + ε′); here ε, ε′ are chosen to be the largest possible positive numbers such that both (x↑εFo)↓εFe and (x↓ε′Fo)↑ε′Fe are feasible assignments.

We also denote x′ (which is a random variable) by x l F.
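As a concrete illustration, one step of the operation just defined can be sketched in code. Everything below is our own illustrative scaffolding (function name, dict-of-Fractions layout, constraint triples); finding the floating cycle or path itself is assumed to be done elsewhere, and the slack computation mirrors the upper bounds (B.1.1)–(B.1.2) derived later.

```python
import random
from fractions import Fraction

def operation_x(x, Fo, Fe, constraints, rng=random):
    """One step of Operation X (a sketch). x: dict edge -> Fraction;
    (Fo, Fe): odd-even decomposition of a floating cycle or path;
    constraints: iterable of (block, lower, upper) triples. Implicit
    0 <= x_e <= 1 constraints are added for every edge on F."""
    cons = list(constraints) + [({e}, 0, 1) for e in set(Fo) | set(Fe)]
    eps_up = eps_dn = None
    for block, lo, hi in cons:
        xB = sum(x[e] for e in block)
        k = len(block & set(Fo)) - len(block & set(Fe))  # change of x(B) per unit up-step
        if k == 0:
            continue                                     # block unaffected either way
        b_up = (hi - xB) / k if k > 0 else (xB - lo) / (-k)
        b_dn = (xB - lo) / k if k > 0 else (hi - xB) / (-k)
        eps_up = b_up if eps_up is None else min(eps_up, b_up)
        eps_dn = b_dn if eps_dn is None else min(eps_dn, b_dn)
    # up-move with probability eps_dn/(eps_up+eps_dn), down-move otherwise;
    # both steps are positive for a genuine floating cycle/path
    step = eps_up if rng.random() < eps_dn / (eps_up + eps_dn) else -eps_dn
    y = dict(x)
    for e in Fo:
        y[e] += step
    for e in Fe:
        y[e] -= step
    return y, eps_up, eps_dn
```

On the all-1/2 doubly-stochastic 2×2 example, both steps equal 1/2 and the outcome is a 0/1 assignment; the returned (eps_up, eps_dn) pair also lets one check the martingale identity exactly.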


B.1.3 The Implementation Mechanism

Our implementation mechanism, which is based on Operation X, is formally defined below.

The Implementation Mechanism Based on Operation X:

1. A fractional assignment x is reported to the mechanism.

2. Set i to 1 and let xi = x.

3. Repeat the following as long as xi contains a floating cycle or a floating path:

(a) If xi contains a floating cycle, let F be an arbitrary floating cycle, other-

wise, let F be an arbitrary floating path.

(b) Define xi+1 to be xi l F .

(c) Increase i by one.

4. Report xi as the outcome of the mechanism.

In the rest of this section, we show that the above mechanism approximately implements x in the sense of Definition 3.2.4.

The first step of the proof is verifying that if the assignment has no floating cycles or paths, then it is necessarily pure; we prove this in Lemma B.1.10. The next step is to show that Operation X is well-defined, in the sense that ε and ε′ cannot both be zero at the same time; we state and prove this fact in Claim B.1.9. Next, we prove the following three important properties of Operation X:

i. The outcome of Operation X satisfies the hard constraints.

ii. Operation X satisfies the martingale property, i.e., E[x l F | x] = x.

iii. The outcome of Operation X has more tight constraints than x.


These properties are proved separately in three Lemmas below.

Lemma B.1.5. The outcome of Operation X satisfies the hard constraints.

Proof. By definition, Operation X chooses ε, ε′ such that both of its two possible

outcomes are feasible with respect to H.

Lemma B.1.6. Operation X satisfies the martingale property, i.e., E[x l F | x] = x.

Proof. We prove the lemma by verifying that this property holds for every entry (i, j) of the assignment matrix: if (x l F)_{(i,j)} denotes the (i, j)-th element of x l F, then

E[(x l F)_{(i,j)} | x] = x_{(i,j)}.

In simple words, we prove that Operation X does not change the value of entry (i, j) of the assignment matrix in expectation.

Observe that by the definition of Operation X,

E[x l F | x] = (ε′/(ε + ε′))·((x↑εFo)↓εFe) + (ε/(ε + ε′))·((x↓ε′Fo)↑ε′Fe).

The claim is trivial if (i, j) ∉ F. So assume (i, j) ∈ F; then either (i, j) ∈ Fo or (i, j) ∈ Fe:

1. If (i, j) ∈ Fo, then Operation X increases x_{(i,j)} by ε with probability ε′/(ε + ε′) and decreases it by ε′ with probability ε/(ε + ε′). In this case, the expected change of x_{(i,j)} equals ε·ε′/(ε + ε′) − ε′·ε/(ε + ε′) = 0.

2. If (i, j) ∈ Fe, then Operation X decreases x_{(i,j)} by ε with probability ε′/(ε + ε′) and increases it by ε′ with probability ε/(ε + ε′). In this case, the expected change of x_{(i,j)} equals −ε·ε′/(ε + ε′) + ε′·ε/(ε + ε′) = 0.

This proves the lemma.
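The two-case computation above can also be checked mechanically with exact rationals. This is a toy check of the identity, not part of the proof; the function name and argument layout are our own.

```python
from fractions import Fraction

def expected_entry(x_e, eps, eps_dn, in_Fo=True):
    """Exact two-branch expectation of one Operation X update on one entry:
    up-move with probability eps_dn/(eps+eps_dn), down-move otherwise;
    signs are flipped for entries in Fe. Always returns x_e exactly."""
    p_up = Fraction(eps_dn, eps + eps_dn)
    s = 1 if in_Fo else -1
    return p_up * (x_e + s * eps) + (1 - p_up) * (x_e - s * eps_dn)
```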


Lemma B.1.7. The outcome of Operation X has more tight constraints than x.

Proof. Suppose F is a floating cycle in x. The proof for the path case is almost

identical. We show that x l F has more tight constraints than x. To do so, we first

show that a tight constraint remains tight after Operation X. Second, we show that

at least one of the floating constraints in x becomes tight in x l F .

To prove the first step, we show that for any tight constraint S, its corresponding

block, B, contains an equal number of elements (edges) from the sets Fo and Fe. This

fact is formally proved below.

Claim B.1.8. Suppose we are given a floating cycle F in the fractional assignment

x, and let (Fo, Fe) be the odd-even decomposition of F . Then, any tight block (in x)

contains an equal number of elements from Fo and Fe.

Proof. Let S = (B, q_B, q̄_B) be a tight constraint, and w.l.o.g. assume B ∈ H1. Then, for any element ei ∈ B ∩ Fe, the element that comes right after ei in F, i.e., ei+1, belongs to B as well. This holds because, by the definition of floating cycles, (ei, ei+1) is supported by H1, which means no block in H1 separates ei and ei+1; consequently, both ei and ei+1 belong to B, or else B itself would separate them. Therefore, for any element ei ∈ B ∩ Fe there exists a distinct element ei+1 ∈ B ∩ Fo which corresponds to it. Similarly, any element in B ∩ Fo corresponds to a distinct element in B ∩ Fe. This proves the claim.

Now recall that whenever Operation X increases (decreases) the elements in Fo,

it decreases (increases) the elements in Fe. This fact and Claim B.1.8 together imply

that x(B) = (x l F ) (B) (regardless of the choice of ε, ε′). This ensures that any tight

constraint remains tight after operation X .

We now prove the second step, which is to show that at least one of the floating

constraints in x becomes tight in x l F. Observe that any floating constraint S = (B, q_B, q̄_B) provides a positive slack for setting the values of ε, ε′: since S is floating, we have q_B < x(B) < q̄_B. Using this fact, we can compute the positive upper bounds that S imposes on ε, ε′. Finally, taking the


minimum of these upper bounds (over all floating constraints S) determines the values

for ε, ε′. We formalize this argument below. Let

s̄ := q̄_B − x(B),
s := x(B) − q_B,
k := |Fo ∩ B| − |Fe ∩ B|.

Then, in order to guarantee that x l F satisfies constraint S, Operation X imposes the following inequalities (which can be translated into upper bounds) on ε and ε′:

ε·k ≤ s̄ if k ≥ 0,  ε·|k| ≤ s if k < 0;  (B.1.1)

ε′·k ≤ s if k ≥ 0,  ε′·|k| ≤ s̄ if k < 0.  (B.1.2)

Now, let u(S), u′(S) respectively denote the (positive) upper bounds imposed by

Inequalities (B.1.1),(B.1.2) on ε, ε′. By definition of ε, ε′, we have that ε = minS u(S)

and ε′ = minS u′(S) where the minimum is over all the floating constraints S. This

argument implies that:

Claim B.1.9. Operation X chooses ε, ε′ such that ε, ε′ > 0.

Proof. It is enough to show that u(S), u′(S) > 0 for all S. This follows by noting that, for any floating constraint S, we have s, s̄ > 0.

The above argument also implies the existence of a floating constraint S1 for

which one of the corresponding inequalities in (B.1.1) is tight. Similarly, there exists

a floating constraint S2 for which one of the corresponding inequalities in (B.1.2) is

tight. These two facts imply that after operation X , either S1 or S2 becomes a tight

constraint.


To summarize, we first showed that if a constraint is tight, then it remains tight after Operation X. Moreover, we showed that there always exists at least one floating constraint which becomes tight after Operation X. Therefore, the number of tight constraints increases, which proves the lemma.

Next, we show that if a fractional assignment contains neither a floating cycle

nor a floating path, then it must be a pure assignment. This guarantees that the

assignment generated by our implementation mechanism is always pure.

Lemma B.1.10. An assignment is pure if and only if it does not contain floating

cycles and floating paths.

Proof. One direction is trivial: if the assignment is pure then it has no floating cycles

or floating paths. We prove the other direction by showing that any assignment x

which is not pure contains a floating path or a floating cycle. Since x is not pure, it

must contain a floating edge e, i.e. an edge e with 0 < xe < 1. We say that a floating

edge e is H1-loose (H2-loose) if no tight block in H1 (H2) contains e. We say that e

is loose if it is either H1-loose or H2-loose.

We need another definition before presenting the proof. Suppose S = (B, q_B, q̄_B) is a tight hard constraint and e is a floating edge in B. Since S is tight and the quotas q_B, q̄_B are integral, B must also contain another floating edge e′. We denote this edge by p(e, B); if there is more than one such edge, let p(e, B) denote one of them arbitrarily.

The proof has two cases, either there is a floating edge which is loose, or there is

no such edge.

Case 1: There exists a loose edge. As the first step of the proof, note that we

are done if there exists a floating edge which is both H1-loose and H2-loose: the edge

would form a floating path of length 1. So, w.l.o.g. suppose there is a floating edge

e which is not H2-loose. In this case, we iteratively construct a floating path that

starts from edge e, i.e. a path F = 〈e1, . . . , el〉 such that e1 = e. At the end, our

iterative construction will either find such a path, or we will find a floating cycle.


Since e1 is not H2-loose, there must be a minimal tight block B1 ∈ H2 that contains e1. Since B1 is tight and the quotas are integral, B1 must also contain another floating edge p(e1, B1). We extend our (under construction) floating

path by setting e2 = p(e1, B1). Now, if e2 is H1-loose, then 〈e1, e2〉 is a floating

path and the proof is complete. So, suppose e2 is not H1-loose. Consequently, there

must be a minimal tight block B2 ∈ H1 that contains e2. Similar to before, B2 must contain another floating edge p(e2, B2); we extend F by setting e3 = p(e2, B2).

By repeating this argument, we can extend F iteratively until the new floating

edge that is added to F , namely ek, either (i) is loose, or (ii) is contained in one of

the previous tight blocks B1, . . . , Bk−1. If case (i) happens, then F is a floating path

and we are done. If case (ii) happens, then we have found a floating cycle: suppose

ek ∈ Bj with j < k. Then, it is straightforward to verify that 〈ej+1, . . . , ek〉 is a

floating cycle.

Case 2: There is no loose edge. Similar to Case 1, we iteratively construct a

floating cycle F = 〈e1, . . . , el〉. The cycle starts from a floating edge e; initially, we

have e1 = e. Since e1 is not loose, there must be minimal tight blocks B0 ∈ H1

and B1 ∈ H2 such that e1 ∈ B0 and e1 ∈ B1. Then, let e2 = p(e1, B1). Similarly,

since e2 is not loose, there must be a tight block B2 ∈ H1 such that e2 ∈ B2. Let

e3 = p(e2, B2). By applying this argument repeatedly, we can extend F until the

new floating edge that is added to F , namely ek, satisfies ek ∈ Bj for some j with

0 ≤ j < k. Then, it is straightforward to verify that 〈ej+1, . . . , ek〉 is a floating

cycle.
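The constructive walk in the proof above can be sketched in code for the special bipartite case of Figure B.2, where H1 is the set of rows and H2 the set of columns of a matrix with integral quotas. The function name and the assumption that no loose edges exist (Case 2: every floating row and column contains a second floating entry) are our own illustrative choices.

```python
def find_floating_cycle(x, tol=1e-9):
    """Sketch of Case 2 of Lemma B.1.10 for the bipartite special case
    (H1 = rows, H2 = columns). Starting from a floating edge, alternately
    jump to a partner p(e, B) in the same column, then in the same row;
    the first repeated edge closes a floating cycle <e_{j+1}, ..., e_k>.
    Assumes every floating row/column has a second floating entry."""
    frac = sorted((i, j) for i, row in enumerate(x)
                  for j, v in enumerate(row) if tol < v < 1 - tol)
    walk, axis = [frac[0]], 1        # axis 1: same column (H2); 0: same row (H1)
    while True:
        e, prev = walk[-1], walk[-2] if len(walk) > 1 else None
        nxt = next(f for f in frac
                   if f != e and f != prev and f[axis] == e[axis])
        if nxt in walk:              # e_k landed in an earlier block B_j
            return walk[walk.index(nxt):]
        walk.append(nxt)
        axis ^= 1                    # alternate between the two hierarchies
```

On the all-1/2 2×2 matrix this returns a cycle of length 4; on a 3×3 matrix whose fractional support forms a single cycle, it returns a cycle of length 6.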

B.1.4 Approximate Satisfaction of Soft Constraints

Here we prove that soft constraints are approximately satisfied in the sense of Defi-

nition 3.2.4. Loosely speaking, Operation X is designed in a way such that it never

increases (or decreases) two (or more) elements of a soft constraint at the same itera-

tion. Consequently, elements of each soft constraint become “negatively correlated”.

This allows us to employ Chernoff concentration bounds to prove that soft constraints

are approximately satisfied.


We show the approximate satisfaction of soft constraints by proving two lemmas

below. In the first lemma, we formally (define and) prove that elements of each soft

constraint are “negatively correlated”; the proof uses a negative correlation proof

technique from [74]. Then, in the second lemma, we prove the approximate satisfac-

tion of soft constraints by applying Chernoff concentration bounds. Before stating

the lemmas, we recall the definition of negative correlation.

Definition B.1.11. For an index set B, a set of binary random variables {Xe}_{e∈B} is negatively correlated if for any subset T ⊆ B we have

Pr[ ∏_{e∈T} Xe = 1 ] ≤ ∏_{e∈T} Pr[Xe = 1],  (B.1.3)

Pr[ ∏_{e∈T} (1 − Xe) = 1 ] ≤ ∏_{e∈T} Pr[Xe = 0].  (B.1.4)

Lemma B.1.12. Let {Xe}e∈E denote the set of random variables which represent the

outcome of the implementation mechanism (i.e. the integral assignment); also, let

B be a block corresponding to an arbitrary soft constraint. Then, the set of random

variables {Xe}e∈B are negatively correlated.

Proof. We need to show that (B.1.3) and (B.1.4) hold for any subset T ⊆ B. We fix an arbitrary subset T and prove (B.1.3) for it; the proof for (B.1.4) is identical and follows by replacing the roles of zeros and ones. Since the random variables are binary, we can prove (B.1.3) by showing that

E[ ∏_{e∈T} Xe ] ≤ ∏_{e∈T} E[Xe] = ∏_{e∈T} x_e.  (B.1.5)

To prove (B.1.5), we introduce a set of random variables {Xe,i}, where Xe,i denotes the value of entry e of the matrix after the i-th execution of Operation X; thus Xe,0 = x_e for all e. Inductively, we show that for all i:

E[ ∏_{e∈T} Xe,i+1 ] ≤ E[ ∏_{e∈T} Xe,i ].  (B.1.6)


The lemma is proved if (B.1.6) holds: assuming that Operation X is executed j times, repeated application of (B.1.6) gives

\[
\mathbb{E}\left[\prod_{e\in T} X_e\right] \;=\; \mathbb{E}\left[\prod_{e\in T} X_{e,j}\right] \;\le\; \mathbb{E}\left[\prod_{e\in T} X_{e,0}\right] \;=\; \prod_{e\in T} x_e,
\]

which shows that (B.1.5) holds and proves the lemma.

To prove (B.1.6), we can alternatively show that

\[
\mathbb{E}\left[\,\prod_{e\in T} X_{e,i+1} \;\middle|\; \{X_{e,i}\}_{e\in T}\right] \;\le\; \prod_{e\in T} X_{e,i}; \tag{B.1.7}
\]

taking expectations on both sides of (B.1.7) then yields (B.1.6).

We consider three cases to prove (B.1.7): since B is in the deepest level of a hierarchy, Operation X changes either 0, 1, or 2 elements of T. We prove this fact in a separate claim below.

Claim B.1.13. Suppose T is a block in the deepest level of a hierarchy. Then Operation X changes either 0, 1, or 2 elements of T.

Proof. W.l.o.g. assume that T is in the deepest level of H_1. We prove a stronger claim. Let T′ be the largest subset of links that contains T and is in the deepest level of H_1. We prove that Operation X changes at most 2 elements of T′. To this end, let F be the floating cycle or path used in Operation X; we need to show that F contains at most 2 elements of T′, which proves the claim.

For contradiction, suppose F contains at least 3 elements of T′. Denote the elements of F by the sequence e_1, . . . , e_l, and let e_i, e_j, e_k be the first three elements of T′ that appear in F, where i < j < k.

First, note that by the definitions of floating cycle and floating path, we must have j = i + 1. We will prove that ⟨e_j, e_{j+1}, . . . , e_{k−1}, e_k⟩ forms a floating cycle, which contradicts the minimality of F (recall that, by definition, Operation X always chooses minimal floating paths and cycles). To this end, first note that (e_j, e_{j+1}) is supported by H_2: this holds because e_{j−1}, e_j ∈ T′, which means (e_{j−1}, e_j) is supported by H_1; consequently, (e_j, e_{j+1}) must be supported by H_2, since F is a floating path or cycle. Similarly, (e_{j+1}, e_{j+2}) is supported by H_1, (e_{j+2}, e_{j+3}) is supported by H_2,


and so on. Finally, note that (e_k, e_j) is supported by H_1, since e_k, e_j ∈ T′. This proves that ⟨e_j, e_{j+1}, . . . , e_{k−1}, e_k⟩ is a floating cycle, which concludes the claim.

We continue the proof of the lemma by considering each of the three cases separately. The proof is trivial if Operation X changes 0 elements of T: (B.1.7) holds with equality. So it remains to consider the two other cases.

First, assume that Operation X changes exactly one element of T, namely e′ ∈ T. Let T′ = T \ {e′}. Then we have

\[
\mathbb{E}\left[\,\prod_{e\in T} X_{e,i+1} \;\middle|\; \{X_{e,i}\}_{e\in T}\right]
= \frac{\varepsilon'}{\varepsilon+\varepsilon'}\,(X_{e',i}+\varepsilon)\prod_{e\in T'} X_{e,i}
\;+\; \frac{\varepsilon}{\varepsilon+\varepsilon'}\,(X_{e',i}-\varepsilon')\prod_{e\in T'} X_{e,i}
\;=\; \prod_{e\in T} X_{e,i},
\]

which proves (B.1.7) with equality in this case. It remains to prove (B.1.7) for the case when Operation X changes exactly 2 elements of T, namely e′, e′′ ∈ T. Let T′′ = T \ {e′, e′′}. Then, w.l.o.g., we can write:

\[
\begin{aligned}
\mathbb{E}\left[\,\prod_{e\in T} X_{e,i+1} \;\middle|\; \{X_{e,i}\}_{e\in T}\right]
&= \frac{\varepsilon'}{\varepsilon+\varepsilon'}\,(X_{e',i}+\varepsilon)(X_{e'',i}-\varepsilon)\prod_{e\in T''} X_{e,i}
\;+\; \frac{\varepsilon}{\varepsilon+\varepsilon'}\,(X_{e',i}-\varepsilon')(X_{e'',i}+\varepsilon')\prod_{e\in T''} X_{e,i} \\
&= \prod_{e\in T} X_{e,i} \;-\; \varepsilon\varepsilon' \prod_{e\in T''} X_{e,i} \\
&\le \prod_{e\in T} X_{e,i},
\end{aligned}
\]

which proves (B.1.7) in the third case as well. This finishes the proof of the lemma.
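The two computations above reduce to the fact that a single step of the randomized update is mean-preserving for each entry: with probability ε′/(ε + ε′) the entry moves up by ε, and otherwise down by ε′. A numeric sanity check of this identity (a sketch; the values of x, ε, ε′ are arbitrary choices, not taken from the text):

```python
import random

# One step of the randomized update on a single entry, as in the proof:
# up by eps with prob eps_p/(eps+eps_p), down by eps_p otherwise.
def one_step(x, eps, eps_p):
    if random.random() < eps_p / (eps + eps_p):
        return x + eps
    return x - eps_p

x, eps, eps_p = 0.3, 0.05, 0.12

# Exact identity used in the one-element case of the proof:
exact = eps_p / (eps + eps_p) * (x + eps) + eps / (eps + eps_p) * (x - eps_p)
assert abs(exact - x) < 1e-12

# Monte Carlo confirmation that the empirical mean stays at x.
random.seed(0)
samples = 200_000
mean = sum(one_step(x, eps, eps_p) for _ in range(samples)) / samples
assert abs(mean - x) < 1e-2
```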

Lemma B.1.14. The randomized mechanism based on Operation X satisfies the soft

constraints approximately in the sense of Definition 3.2.4.

Proof. Based on Definition 3.2.4, we need to prove that for any soft constraint defined on a block B of the links with ∑_{e∈B} x_e = µ, and for any ε > 0, we have

\[
\Pr\left(\sum_{e\in B} w_e X_e - \mu < -\varepsilon\mu\right) \;\le\; e^{-\mu\varepsilon^2/2},
\]

\[
\Pr\left(\sum_{e\in B} w_e X_e - \mu > \varepsilon\mu\right) \;\le\; e^{-\mu\varepsilon^2/3}.
\]

These probabilistic bounds, as we mentioned before, are known as Chernoff concentration bounds (see Section B.3 for more details). These bounds hold for any set of binary random variables that are negatively correlated [13]. Lemma B.1.12 states precisely that the random variables {X_e}_{e∈B} are negatively correlated, so the Chernoff concentration bounds hold for {X_e}_{e∈B}.
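To make the guarantee concrete, the worst-case tail bounds of Lemma B.1.14 can be evaluated for a soft constraint with µ = 500, the target used in the example of Section B.2 (a sketch; the ε values are illustrative):

```python
from math import exp

mu = 500  # target of each soft constraint in the Section B.2 example
for eps in (0.02, 0.05, 0.10):
    below = exp(-mu * eps**2 / 2)  # bound on Pr(sum < (1 - eps) * mu)
    above = exp(-mu * eps**2 / 3)  # bound on Pr(sum > (1 + eps) * mu)
    print(f"eps = {eps:.2f}: below <= {below:.3f}, above <= {above:.3f}")
```

The bounds are quite loose for small ε (e.g., larger than 0.5 at ε = 5%), which is consistent with the simulations in Section B.2 performing far better than these worst-case guarantees.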

B.2 Average Performance of the Matching Algorithm

In this section, we implement our matching algorithm on an example. The goal of

this example is to show that the average performance of our matching algorithm is

much better than the worst-case bounds that one can theoretically prove. For the

sake of clarity, we use a simple example with multiple intersecting constraints.

Setup of example: Consider a school choice setting with 10 schools and 10000 students. Suppose each school has a capacity of 1000 students. Also, suppose half of the students are from the walk-zone of schools 1, 2, 3, 4, and 5, and the other half are from the walk-zone of schools 6, 7, 8, 9, and 10. In addition, half of the students are categorized as low-socioeconomic-status (LSES) students, and half of the students are male. Suppose all students have the same utility function (or rank-order list) over schools.

Hard and soft constraints: The only hard constraints imposed on this problem are “all-row” constraints: all students should be assigned to exactly one school. All schools have three diversity goals, which we model as soft constraints: their goal is to admit 500 students (i.e., 50% of their capacity) from the students of their own walk-zone, 500 students from the LSES students, and 500 female students.

Figure B.3: The empirical probability of violating a constraint from below by ε% (equivalently, Pr(dev− ≥ εµ)), where µ = 500. The probability is calculated by running the matching algorithm T = 1000 times and then computing the probability of admitting fewer than 500(1 − ε) students.

Fractional assignment: Let x be a fractional assignment where x_{(i,j)} = 1/10 for all pairs (i, j). One can easily show that x satisfies all hard and soft constraints exactly.2
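Checking this is short arithmetic; a sketch using exact rational arithmetic, with the counts of the example:

```python
from fractions import Fraction

# Uniform fractional assignment x_{(i,j)} = 1/10 for the example:
# 10000 students, 10 schools with capacity 1000, and each diversity
# group (walk-zone, LSES, female) containing exactly half the students.
n_students, n_schools = 10_000, 10
x = Fraction(1, 10)  # exact 1/10, avoiding floating-point rounding

assert n_schools * x == 1             # hard "all-row": each student fully assigned
assert n_students * x == 1000         # each school exactly at capacity
assert (n_students // 2) * x == 500   # each soft-constraint target met exactly
```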

Simulation: We implement this fractional assignment with our matching algorithm based on Operation X 1000 times. We then calculate the “empirical probability” of violating each one of the diversity constraints by a factor of ε = 1%, 2%, . . . , 10%.

Figure B.3 illustrates the empirical probability of admitting fewer than 500(1 − ε) students of a specific diversity type. As can be seen, the average performance of our matching algorithm is much better than the worst-case bound that we can theoretically prove.

Figure B.4: The empirical probability of violating a constraint from above by ε% (equivalently, Pr(dev+ ≥ εµ)), where µ = 500. The probability is calculated by running the matching algorithm T = 1000 times and then computing the probability of admitting more than 500(1 + ε) students.

Figure B.4 illustrates the empirical probability of admitting more than 500(1 + ε) students of a specific diversity type. Again, the average performance of our matching algorithm is much better than the theoretical worst-case bound.

2 It is also clear that, because of the symmetry of the problem, this assignment is fair and Pareto efficient.

B.3 Chernoff Bounds

Let X_1, . . . , X_n be a sequence of n independent binary random variables such that X_i = 1 with probability p_i and X_i = 0 with probability 1 − p_i. Also, let µ = ∑_{i=1}^{n} E[X_i]. Then for any ε with 0 ≤ ε ≤ 1 we have:

\[
\Pr\left[\sum_{i=1}^{n} X_i > (1+\varepsilon)\mu\right] \;\le\; e^{-\varepsilon^2\mu/3}, \tag{B.3.1}
\]

\[
\Pr\left[\sum_{i=1}^{n} X_i < (1-\varepsilon)\mu\right] \;\le\; e^{-\varepsilon^2\mu/2}. \tag{B.3.2}
\]


Moreover, the above inequalities still hold if the variables X_1, . . . , X_n are negatively correlated. (We refer the reader to Definition B.1.11 for the formal definition of negative correlation.)
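As a sanity check, the upper-tail bound (B.3.1) can be compared against a Monte Carlo estimate for independent Bernoulli variables (a sketch; n, p, ε, and the trial count are illustrative choices):

```python
import random
from math import exp

random.seed(1)
n, p, eps = 400, 0.5, 0.2  # illustrative parameters; mu = n * p = 200
mu = n * p

trials = 5_000
hits = sum(
    1 for _ in range(trials)
    if sum(random.random() < p for _ in range(n)) > (1 + eps) * mu
)
empirical = hits / trials      # empirical Pr[sum > (1 + eps) * mu]
bound = exp(-eps**2 * mu / 3)  # Chernoff bound (B.3.1), roughly 0.07

assert empirical <= bound
```

The empirical tail is far smaller than the bound here, which mirrors the gap between the worst-case guarantees and the simulated performance in Section B.2.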


Bibliography

[1] Abdulkadiroglu, Atila, Parag A. Pathak, and Alvin E. Roth. “The New York city

high school match.” American Economic Review (2005): 364-367. 66, 81

[2] Abdulkadiroglu, Atila, and Tayfun Sonmez. “Random serial dictatorship and the

core from random endowments in house allocation problems.” Econometrica 66.3

(1998): 689-701. 66

[3] http://plato.stanford.edu/entries/affirmative-action/ 81

[4] Abdulkadiroglu, Atila, and Tayfun Sonmez. “School choice: A mechanism design approach.” American Economic Review (2003): 729-747. 81

[5] A. A. Ageev , M. I. Sviridenko. “Pipage Rounding: a New Method of Construct-

ing Algorithms with Proven Performance Guarantee”. Journal of Combinatorial

Optimization 8(3): 307-328 (2004).

[6] Akbarpour, Mohammad, Shengwu Li, and Shayan Oveis Gharan. “Dynamic

matching market design.” Available at SSRN 2394319 (2014). 3

[7] Akbarpour, Mohammad, and Afshin Nikzad. “Approximate random allocation

mechanisms.” Available at SSRN 2422777 (2014). 3

[8] Akkina, S. K., Muster, H., Steffens, E., Kim, S. J., Kasiske, B. L., and Israni,

A. K. (2011). Donor exchange programs in kidney transplantation: rationale and

operational details from the north central donor exchange cooperative. American

Journal of Kidney Diseases, 57(1):152–158. 8



[9] Anderson, R., Ashlagi, I., Gamarnik, D., and Kanoria, Y. (2014). A dynamic model of barter exchange. Technical report, working paper, 2013. New York, NY, USA, 2013. ACM. ISBN 978-1-4503-1962-1. doi: 10.1145/2482540.2482569. URL http://doi.acm.org/10.1145/2482540.2482569. 12

[10] Arnosti, Nick and Johari, Ramesh and Kanoria, Yash, “Managing congestion in

decentralized matching markets,” Proceedings of the fifteenth ACM conference on

Economics and computation (2014), 451–451. 13

[11] Ashlagi, I., Jaillet, P., and Manshadi, V. H. (2013). Kidney exchange in dynamic

sparse heterogenous pools. In EC, pages 25–26. 13

[12] Athey, S. and Segal, I. (2007). Designing efficient mechanisms for dynamic bi-

lateral trading games. AER, 97(2):131–136. 13

[13] Anne Auger and Benjamin Doerr, editors. “Theory of Randomized Search Heuristics: Foundations and Recent Developments”, Volume 1, Chapter 1. Feb 2011. 124

[14] Awasthi, P. and Sandholm, T. (2009). Online stochastic optimization in the

large: Application to kidney exchange. In IJCAI, volume 9, pages 405–411. 13

[15] Baccara, Mariagiovanna, and Ayse Imrohoroglu. “A field study on matching with network externalities.” The American Economic Review 102.5 (2012): 1773-1804. 66

[16] Bertsekas, D. P. (2000). Dynamic Programming and Optimal Control. Athena

Scientific, 2nd edition. 17

[17] Birkhoff, Garrett. “Three observations on linear algebra.” Rev. Univ. Nac. Tucuman, Ser. A 5 (1946): 147-151. 64

[18] Bloch, F. and Houy, N. (2012). Optimal assignment of durable objects to suc-

cessive agents. Economic Theory, 51(1):13–33. 12


[19] Bogomolnaia, Anna, and Herve Moulin. “A new solution to the random assign-

ment problem.” Journal of Economic Theory 100.2 (2001): 295-328. 61, 66

[20] Border, Kim C. “Implementation of reduced form auctions: A geometric ap-

proach.” Econometrica 59.4 (1991): 1175-1187. 67

[21] Budish, Eric. “The combinatorial assignment problem: Approximate competitive

equilibrium from equal incomes.” Journal of Political Economy 119.6 (2011): 1061-

1103. 66, 67

[22] Budish, Eric. “Matching ‘versus’ mechanism design.” SIGecom Exchanges 11.2 (2012): 4-15.

[23] Budish, Eric, Yeon-Koo Che, Fuhito Kojima, and Paul Milgrom. 2013. “De-

signing Random Allocation Mechanisms: Theory and Applications.” American

Economic Review, 103(2): 585-623.

[24] Budish, Eric. “The Combinatorial Assignment Problem: Approximate Competi-

tive Equilibrium from Equal Incomes”. Journal of Political Economy Vol. 119(6),

Dec 2011, pp 1061-1103 61, 67, 70

[25] Bulow, J. and Klemperer, P. (1996). Auctions versus negotiations. The American

Economic Review, pages 180–194. 11

[26] Che, Yeon-Koo, Jinwoo Kim, and Konrad Mierendorff. “Generalized reduced-

form auctions: a network-flow approach.” Available at SSRN 1957071 (2011).

67

[27] Che, Yeon-Koo, and Fuhito Kojima. “Asymptotic equivalence of probabilistic

serial and random priority mechanisms.” Econometrica 78.5 (2010): 1625-1672.

66, 85

[28] Chekuri, Chandra, Jan Vondrak, and Rico Zenklusen. “Dependent randomized

rounding via exchange properties of combinatorial structures.” Foundations of

Computer Science (FOCS), 2010 51st Annual IEEE Symposium on. IEEE, 2010.

68


[29] Chen, Yan, and Tayfun Sonmez. “Improving efficiency of on-campus housing: an

experimental study.” The American Economic Review 92.5 (2002): 1669-1686. 66

[30] Crawford, V. P. and Knoer, E. M. (1981). Job matching with heterogeneous

firms and workers. Econometrica, pages 437–450. 6

[31] Dickerson, J. P., Procaccia, A. D., and Sandholm, T. (2012). Dynamic matching

via weighted myopia with application to kidney exchange. In AAAI. 13

[32] Ehlers, Lars, Isa E. Hafalir, M. Bumin Yenmez, and Muhammed A. YILDIRIMY.

“School choice with controlled choice constraints: Hard bounds versus soft

bounds.” (2011). 67

[33] Erdos, P. and Renyi, A. (1960). On the evolution of random graphs. Publications of the Mathematical Institute of the Hungarian Academy of Sciences, pages 17–61. 15

[34] Feldman, J., Mehta, A., Mirrokni, V., and Muthukrishnan, S. (2009). Online

stochastic matching: Beating 1-1/e. In FOCS, pages 117–126. 13

[35] Gale, D. and Shapley, L. S. (1962). College admissions and the stability of

marriage. The American Mathematical Monthly, 69(1):9–15. 6

[36] Gallien, J. (2006). Dynamic mechanism design for online commerce. OR,

54(2):291–310. 13

[37] Goel, G. and Mehta, A. (2008). Online budgeted matching in random input

models with applications to adwords. In SODA, pages 982–991. 13

[38] Grimmett, G. and Stirzaker, D. (1992). Probability and random processes. Oxford

University Press, 2nd edition. 91

[39] Hafalir, Isa E., M. Bumin Yenmez, and Muhammed A. Yildirim. “Effective af-

firmative action in school choice.” Theoretical Economics 8.2 (2013): 325-363.

81


[40] Hatfield, John William. “Strategy-proof, efficient, and nonbossy quota alloca-

tions.” Social Choice and Welfare 33.3 (2009): 505-515. 66

[41] Hatfield, J. W. and Kojima, F. (2010). Substitutes and stability for matching

with contracts. Journal of Economic Theory, 145(5):1704–1723. 6

[42] Hatfield, J. W. and Milgrom, P. R. (2005). Matching with contracts. AER, pages

913–935. 6, 60

[43] Hylland, Aanund, and Richard Zeckhauser. “The efficient allocation of individ-

uals to positions.” The Journal of Political Economy (1979): 293-314. 61, 66

[44] Immorlica, Nicole, and Mohammad Mahdian.“Marriage, honesty, and stability.”

Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algo-

rithms. Society for Industrial and Applied Mathematics, 2005.

[45] Jacobs, Lesley A. “Pursuing equal opportunities: the theory and practice of

egalitarian justice.” Cambridge University Press, 2004. 81

[46] Karp, R. M. and Sipser, M. (1981). Maximum matchings in sparse random

graphs. In FOCS, pages 364–375.

[47] Karp, R. M., Vazirani, U. V., and Vazirani, V. V. (1990). An optimal algorithm

for on-line bipartite matching. In STOC, pages 352–358. 13

[48] Kamada, Yuichiro, and Fuhito Kojima. “Efficient Matching Under Distributional

Constraints: Theory and Applications” (2013).

[49] Kelso Jr, A. S. and Crawford, V. P. (1982). Job matching, coalition formation,

and gross substitutes. Econometrica, pages 1483–1504. 6, 60

[50] Kojima, Fuhito. “Random assignment of multiple indivisible objects.” Mathe-

matical Social Sciences 57.1 (2009): 134-142. 66

[51] Kojima, Fuhito, and Parag A. Pathak.“Incentives and stability in large two-sided

matching markets.” The American Economic Review (2009): 608-627.


[52] Kojima, Fuhito, and Mihai Manea. “Incentives in the probabilistic serial mech-

anism.” Journal of Economic Theory 145.1 (2010): 106-123. 66

[53] Kominers, Scott Duke, and Tayfun Sonmez. “Designing for diversity in match-

ing.” EC. 2013. 81

[54] Kurino, M. (2009). House allocation with overlapping agents: A dynamic mech-

anism design approach. Jena economic research papers, (075). 12

[55] Leshno, J. D. (2012). Dynamic matching in overloaded systems. http://www.people.fas.harvard.edu/~jleshno/papers.html. 12

[56] Levin, D. A., Peres, Y., and Wilmer, E. L. (2006). Markov chains and mixing

times. American Mathematical Society. 37, 91

[57] Lim, N., Marquis, J. P., Hall, K. C., Schulker, D., and Zhuo, X. (2009). Officer

classification and the future of diversity among senior military leaders: A case

study of the Army ROTC. RAND NATIONAL DEFENSE RESEARCH INST

SANTA MONICA CA.

[58] Manea, Mihai. ”Asymptotic ordinal inefficiency of random serial dictatorship.”

Theoretical Economics 4.2 (2009): 165-197. 66

[59] Manshadi, V. H., Oveis Gharan, S., and Saberi, A. (2012). Online stochastic

matching: Online actions based on offline statistics. MOR, 37(4):559–573. 13

[60] Matthews, Steven A.“On the implementability of reduced form auctions.” Econo-

metrica: Journal of the Econometric Society (1984): 1519-1522. 67

[61] Mehta, A., Saberi, A., Vazirani, U., and Vazirani, V. (2007). Adwords and

generalized online matching. JACM, 54(5):22. 13

[62] Myerson, R. B. (1981). Optimal auction design. MOR, 6(1):58–73. 13

[63] Nguyen, Thanh, Ahmad Peivandi, and Rakesh Vohra. “One-sided matching with limited complementarities.” (2014). 67, 68


[64] Norris, J. (1998). Markov Chains. Number No. 2008 in Cambridge Series in

Statistical and Probabilistic Mathematics. Cambridge University Press. 37, 38,

90, 91

[65] Pai, M. M. and Vohra, R. (2013). Optimal dynamic auctions and simple index

rules. MOR. 13

[66] Parkes, D. C. (2007). Online mechanisms. In Nisan, N., Roughgarden, T.,

Tardos, E., and Vazirani, V., editors, Algorithmic Game Theory, pages 411–439.

Cambridge University Press, Cambridge. 13

[67] Parkes, D. C. and Singh, S. P. (2003). An mdp-based approach to online mech-

anism design. In Advances in neural information processing systems, page None.

13

[68] Pycia, Marek, and Utku Unver, “Decomposing random mechanisms”, Available

at SSRN 2189235, 2012. 67

[69] Rubinstein, Aviad. ”Inapproximability of Nash Equilibrium.” arXiv preprint

arXiv:1405.3322 (2014). 67

[70] Roth, Alvin E. “Repugnance as a Constraint on Markets.” No. w12702. National

Bureau of Economic Research, 2006.

[71] Roth, A. and Sotomayor, M. (1992). Two-Sided Matching: A Study in Game-

Theoretic Modeling and Analysis. Econometric Society Monographs. Cambridge

University Press. 6

[72] Roth, A. E., Sonmez, T., and Unver, M. U. (2004). Kidney exchange. The

Quarterly Journal of Economics, 119(2):457–488. 6

[73] Roth, A. E., Sonmez, T., and Unver, M. U. (2007). Efficient kidney exchange:

Coincidence of wants in markets with compatibility-based preferences. AER, pages

828–851. 6


[74] Samir Khuller, Srinivasan Parthasarathy, Aravind Srinivasan. “Dependent rounding and its applications to approximation algorithms.” Journal of the ACM, Vol. 53, 324-360, 2006. 68, 77, 121

[75] Su, X. and Zenios, S. A. (2005). Patient choice in kidney allocation: A sequential

stochastic assignment model. OR, 53(3):443–455. 13

[76] Unver, M. U. (2010). Dynamic kidney exchange. The Review of Economic

Studies, 77(1):372–414. 13

[77] Von Neumann, John. “A certain zero-sum two-person game equivalent to the

optimal assignment problem.” Contributions to the Theory of Games 2 (1953):

5-12. 64

[78] Wiesner, R., Edwards, E., Freeman, R., Harper, A., Kim, R., Kamath, P., Kre-

mers, W., Lake, J., Howard, T., Merion, R. M., Wolfe, R. A., and Krom, R.

(2003). Model for end-stage liver disease (meld) and allocation of donor livers.

Gastroenterology, 124(1):91 – 96. 20

[79] Zenios, S. A. (2002). Optimal control of a paired-kidney exchange program. MS,

48(3):328–342. 13