Mechanism Design with Aftermarkets:
On the Optimality of Cutoff Mechanisms
PRELIMINARY AND INCOMPLETE
Piotr Dworczak∗
January 2, 2017
Abstract
I study a mechanism design problem of allocating a single good to one of several agents.
The mechanism is followed by an aftermarket, that is, a post-mechanism game played be-
tween the agent who acquired the good and a third-party market participant. The designer
has preferences over final outcomes, but she cannot redesign the aftermarket. However, she
can influence its information structure by disclosing information elicited by the mechanism,
subject to providing incentives for agents to report truthfully.
A companion paper (Dworczak, 2016) introduces a class of cutoff mechanisms, charac-
terizes their properties, and derives the optimal mechanism within the class. In this paper,
under the assumption that the aftermarket payoffs are determined by a binary decision of the
third party, I provide sufficient conditions for optimality of cutoff mechanisms.
I also analyze a version of the model in which cutoff mechanisms are sometimes suboptimal.
I derive robust payoff bounds on their performance, and show that by using a cutoff mechanism
the designer can often guarantee a large fraction of the payoff of the optimal (non-cutoff)
mechanism.
Keywords: Mechanism Design, Information Design, Auctions
JEL codes: D44, D47, D82, D83.
∗Stanford University, Graduate School of Business, [email protected]. I would like to thank Darrell Duffie, Paul Milgrom, Michael Ostrovsky, Alessandro Pavan, and Andy Skrzypacz for helpful discussions.
1 Introduction
Real-life mechanisms rarely take place in a vacuum. Auctions are often embedded in
markets in which bidders can resell the items they bought. Procurement mechanisms
are followed by bargaining between winners of contracts and subcontractors. While
such post-mechanism interactions may be beyond the direct control of the mechanism
designer, the designer can nevertheless influence their outcome by disclosing information
elicited by the mechanism. For example, by revealing the price paid by the winning
bidder in a first price auction, an auctioneer changes the information structure of a
post-auction resale game. This can lead to a different level and split of surplus in
the aftermarket, and hence different behavior of bidders in the auction. In mechanism
design with aftermarkets, the designer chooses an ex-post information disclosure as part
of the mechanism.
This paper contains results that complement and extend the analysis of the pa-
per “Mechanism Design with Aftermarkets: Cutoff Mechanisms” (Dworczak, 2016). I
study the same model of mechanism design with aftermarkets, where the aftermarket
is defined as an interaction between the agent who acquired the object in the first-stage
mechanism and a third-party market participant. Instead of restricting attention to the
class of cutoff mechanisms, I impose a simplifying assumption on the structure of the
aftermarket: I assume that the outcome in the aftermarket is determined by a binary
decision of the third party. As a result, it is sufficient to look at binary signals in
the mechanism. This simplification allows me to characterize the set of implementable
allocation and disclosure rules, and then solve for optimal mechanisms.
The reader is referred to Dworczak (2016) for a discussion of the importance of
aftermarkets in real-life design problems. In this paper, I focus on the methodological
contribution, and on the economic intuitions underlying the results on optimality of
cutoff mechanisms. Cutoff mechanisms have a number of properties that were studied
extensively in Dworczak (2016). Here, I abstract away from those properties, and
focus solely on the issue of (Bayesian) optimality – when is it that a cutoff mechanism
achieves the highest value of the objective function of the mechanism designer among
all incentive-compatible mechanisms?
To answer this question, I first study an auxiliary problem of pure information
intermediation. This is a problem in which the designer does not control the allocation of
the physical good in the mechanism. That is, the designer acts only as an information
mediator: Under commitment, she can send signals and exchange payments with the
agent as a function of the agent’s report. After the signal is sent by the mechanism,
the agent interacts with the third party in the aftermarket. The question of interest is
whether the designer can influence the outcome of the aftermarket. The first result of
the paper (Theorem 1) provides conditions under which the answer is negative. The
condition requires that the agent’s and the third party’s preferences are misaligned:
When lower types of the agent have a higher willingness to pay for the high action of
the third party, the third party prefers to take the high action only if she believes that
the type of the agent is high. This is satisfied, for example, when the third party buys
a good from the agent (the high action corresponds to a high resale price).1 Formally,
the condition is expressed in terms of sub/supermodularity of the agent’s and the third
party’s utility functions in the type of the agent and the action of the third party. I
refer to the case of misaligned preferences as counter-modularity: When the agent’s
utility is submodular, the third party’s utility is supermodular (or vice versa). When
utilities of players are co-modular, the result is false: The designer can successfully
disclose information about the agent’s type to the aftermarket.
The analysis of pure information intermediation helps explain the main contribution
of the paper, that is, conditions for optimality of cutoff mechanisms (Theorems 2 - 4).
When the utility functions of the agent and the third party are counter-modular, and
certain additional conditions hold, a cutoff mechanism is optimal. (The additional
conditions restrict the set of possible objective functions of the mechanism designer,
and require that preferences of the agent take a strong form of sub/supermodularity.)
As shown in Dworczak (2016), cutoff mechanisms, which are mechanisms that only
disclose information about a cutoff (and not about the type of the agent directly), are
always feasible. That is, regardless of the form of the aftermarket, information about the
cutoff can be revealed without violating incentive constraints in the mechanism. When
preferences of the agent and the third party satisfy counter-modularity, as shown in
Theorem 1, it is hard to disclose information about the agent’s type directly. Therefore,
revealing information about the cutoff alone is optimal.
In the setting of pure information intermediation, the corresponding cutoff is a de-
generate random variable. Thus, a cutoff mechanism cannot disclose any non-redundant
information, consistent with Theorem 1. However, when the designer controls the al-
location, and allocates with different probabilities to different types (for example, by
running an auction), the cutoff becomes informative about the type of the winner, and
1 In a companion note, I extend this result to the case when the third party proposes an arbitrary mechanism in the aftermarket, without the restriction to binary actions.
its disclosure influences the outcome of the aftermarket.
The results on optimality of cutoff mechanisms are illustrated with three examples,
where the aftermarket is, respectively, (i) a resale game in which the third party has
full bargaining power, (ii) a game in which the winner buys a complementary good
from a monopolist, and (iii) a resale game with adverse selection.
Finally, I consider cases when cutoff mechanisms fail to be optimal. In Theorem
5, I characterize optimal mechanisms for the case of co-modularity of the agent’s and
the third party’s preferences. I show that the optimal mechanism reveals information
about the type of the winner directly: The optimal (binary) signal partitions the type
space into low and high types, and discloses this information to the third party.
I also analyze a model in which preferences of the third party and the agent are
counter-modular, but the agent’s utility takes a more general form than that permitted
by Theorems 2 - 4. A cutoff mechanism is sometimes optimal, but is not optimal in
general. I derive robust payoff bounds on the performance of cutoff mechanisms. For
the case when the designer maximizes total surplus in the market, I show that a cutoff
mechanism achieves nearly 90% of the payoff of the optimal mechanism in the worst
case. Similar (but weaker) bounds are established for revenue maximization, and for
an arbitrary objective function of the designer.
The rest of the paper is organized as follows. In Section 2, I formally introduce the
model. I study a version with one agent which is then extended to multiple agents in
Section 5. In Section 3, the results on pure information intermediation are presented.
Section 4 contains the main results on optimality of cutoff mechanisms, and Section 6
contains the applications. In Section 7, I study optimal non-cutoff mechanisms, and derive
robust payoff bounds.
1.1 Related literature
A review of the literature on post-mechanism interactions is contained in the companion
paper Dworczak (2016), and I do not duplicate it here. I offer additional discussions
of related papers in the context of specific results throughout the paper. In particular,
my results on pure information intermediation are related to the model of Crawford
and Sobel (1982). I also compare my results to Calzolari and Pavan (2006) who study
an instance of a problem with one agent for which the unique optimal mechanism is
sometimes not a cutoff mechanism. I explain the relationship between their result and
my conditions for optimality in Section 7.
2 The baseline model
A mechanism designer is a seller who chooses a mechanism to sell an indivisible object
to an agent. The agent has a private type θ ∈ Θ. I normalize Θ = [0, 1], and assume
that θ is distributed according to a continuous full-support distribution F with density
f . If the agent acquires the good, she participates in the aftermarket which is an
interaction with a third party. The mechanism designer cannot contract with the third
party.
The market game consists of two stages: (1) implementation of the mechanism, and
(2) post-mechanism interaction between the agent and the third party (the aftermar-
ket). In the first stage, the seller chooses and publicly announces a direct mechanism
(x, π, t), where x : Θ→ [0, 1] is an allocation function, π : Θ→ ∆(S) is a signal func-
tion with some finite signal set S, and t : Θ → R is a transfer function.2 If the agent
reports θ, she receives the good with probability x(θ) and pays t(θ). Conditional on
selling the good, the designer draws and publicly announces a signal s ∈ S according to
distribution π(·| θ). For technical reasons, I assume that x(θ) must be right-continuous,
and π(s| θ) is measurable in θ. I call (x, π) a mechanism frame.
In the second stage, the third party observes the signal realization s, and Bayes-
updates her beliefs (knowing whether the agent acquired the good in the mechanism or
not). I let F s denote the updated belief over the agent’s type (conditional on the event
that the agent acquired the good and signal s was observed). The third party then takes
a binary decision a ∈ {l, h} to maximize the expectation of an upper semi-continuous
function va : Θ → R,

a*(F s) = argmax_{a ∈ {l, h}} ∫_0^1 va(θ) dF s(θ).
When the third party is indifferent, it is assumed that the selection from the argmax
correspondence is made by the designer.
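As a numerical illustration of this decision rule, the best response a*(F s) can be computed directly for a discrete posterior. The specification below is hypothetical (not tied to any particular aftermarket): vl is normalized to zero and vh(θ) − vl(θ) = θ − 1/2, so the high action pays off only against high types.

```python
import numpy as np

# Hypothetical payoffs (illustration only): v_l = 0 and
# v_h(theta) - v_l(theta) = theta - 0.5.
def v(a, theta):
    return theta - 0.5 if a == "h" else 0.0

def best_response(types, weights):
    """a*(F^s): the action maximizing the posterior expectation of v_a(theta)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize the posterior F^s over the listed types
    payoff = {a: float(np.dot([v(a, t) for t in types], w)) for a in ("l", "h")}
    return max(payoff, key=payoff.get)

# A posterior concentrated on high types induces the high action.
print(best_response([0.2, 0.9], [0.1, 0.9]))  # -> h
print(best_response([0.2, 0.9], [0.9, 0.1]))  # -> l
```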
Although the aftermarket payoffs are formally determined by the decision of the
third party, this reduced-form specification allows for a non-trivial strategic interaction
between the agent and the third party. This is because the function va(θ) could be
determined endogenously in equilibrium. For example, if a denotes the price quoted by
the third party (high or low), the function va is derived from the optimal acceptance
2 Using a direct mechanism is without loss of generality, by the Revelation Principle; see for example Myerson (1982).
decision of the agent.
The agent’s payoff, net of transfers, conditional on acquiring the good in the mech-
anism is given by
Ua(θ) = ua θ + ca, (2.1)
where ua ≥ 0 and ca are constants. Linearity of Ua(θ) in θ is assumed for tractability.
The important assumption is that the action of the third party influences the slope of
the utility function of the agent. The agent’s payoff is normalized to zero if the agent
does not acquire the good. The agent’s final utility is quasi-linear in transfers, and the
agent is an expected-utility maximizer.
The mechanism designer’s ex-post utility is given by the function V a(θ) if the good
is allocated and the third party takes action a. If the good is not allocated, the payoff is
normalized to zero. For clarity of exposition, I assume that the designer always weakly
prefers the third party to take the high action, i.e. Vh(θ) ≥ Vl(θ), for all θ ∈ Θ. To
guarantee existence of solutions to all problems considered in this paper, I assume that
V h(θ), V l(θ), and V h(θ) − V l(θ) are all upper semi-continuous in θ. The mechanism
designer maximizes expected utility.3
To avoid trivial cases, unless explicitly stated, I assume that uh ≠ ul, and that
vh(θ) − vl(θ) takes both strictly positive and strictly negative values on sets of points of
non-zero measure (otherwise, the choice of the action by the third party is independent
of beliefs about θ).
2.1 Implementability
Because the action of the third party is binary, it is without loss of generality to assume
that the signal sent by the optimal mechanism is also binary (see for example Myer-
son, 1982). Moreover, the signal can be labeled by the action that it induces, that is,
S = {l, h}. I denote q(θ) = π(h| θ). From now on, a mechanism frame is represented
by the pair (x, q), where x is the allocation rule, and q(θ) is the conditional probabil-
ity of recommending the high action conditional on type θ acquiring the good in the
mechanism.
Definition 1. A mechanism frame (x, q) is implementable if there exist transfers t such
that the agent participates and reports truthfully in the first-stage mechanism, taking
3 The designer’s utility does not depend explicitly on transfers. However, under quasi-linear utility, transfers in the mechanism are uniquely pinned down by the final allocation of the good, and therefore the setting allows for objectives such as expected revenue maximization.
into account the continuation payoff from the aftermarket,
Uh(θ)q(θ)x(θ) + Ul(θ)(1 − q(θ))x(θ) − t(θ) ≥ 0, (IR)

θ ∈ argmax_{θ′} {Uh(θ)q(θ′)x(θ′) + Ul(θ)(1 − q(θ′))x(θ′) − t(θ′)}, (IC)

for all θ ∈ Θ, and the third party obeys the recommendations of the mechanism,

∫_0^1 [vh(θ) − vl(θ)] q(θ)x(θ)f(θ) dθ ≥ 0, (OBh)

∫_0^1 [vh(θ) − vl(θ)] (1 − q(θ))x(θ)f(θ) dθ ≤ 0. (OBl)
The tractability of the binary model stems from the fact that the (IR) and (IC)
constraints admit a simple representation (which is not typically the case in the presence
of an aftermarket). By a standard argument,4 it can be shown that there exists a
transfer function t such that (IR) and (IC) are satisfied if and only if
q(θ)x(θ)uh + (1− q(θ))x(θ)ul is non-decreasing in θ. (M)
Fact 1. A mechanism frame (x, q) is implementable if and only if conditions (M),
(OBh), and (OBl) all hold.
An interesting consequence of Fact 1 is that a decreasing allocation rule x may be
implemented (in contrast to the classical setting of Myerson, 1981) if information is
disclosed in an appropriate way. This is possible whenever uh ≠ ul, that is, whenever
the agent’s types have differential preferences over the actions of the third party in the
aftermarket. For example, when uh = 0 and ul > 0, implementability is equivalent to
monotonicity of (1 − q(θ))x(θ), and therefore x(θ) can be decreasing as long as this is
offset by a sufficient increase in 1− q(θ), the probability of sending the low signal.
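As a concrete sketch of this possibility (with hypothetical numbers), take uh = 0 and ul = 1, a decreasing allocation rule x(θ) = 1 − 0.3θ, and calibrate q so that (1 − q(θ))x(θ) = 0.6 + 0.1θ; the decrease in x is then offset by the disclosure rule and condition (M) holds:

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 101)
u_h, u_l = 0.0, 1.0                     # strongly submodular agent utility (illustrative)
x = 1.0 - 0.3 * theta                   # decreasing allocation rule
q = 1.0 - (0.6 + 0.1 * theta) / x       # chosen so that (1 - q) * x = 0.6 + 0.1 * theta

assert ((q >= -1e-12) & (q <= 1 + 1e-12)).all()  # q is a valid probability
m = q * x * u_h + (1.0 - q) * x * u_l   # the expression in condition (M)
assert (np.diff(m) >= -1e-12).all()     # (M) holds: non-decreasing in theta
print(round(m[0], 6), round(m[-1], 6))  # -> 0.6 0.7
```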
I say that (x, q) reveals no information if the distribution of actions (conditional
on each θ) induced by (x, q) in the aftermarket could also be induced by a mechanism
with the same allocation rule x that sends a redundant (constant) message with probability one.
Note that a no-information-revealing mechanism may still influence the beliefs in
the aftermarket because the third party conditions on the event that the agent acquired
4 See for example Myerson (1981) or the companion paper.
the good. If x is non-constant, the posterior belief will be different from the prior f
even if the mechanism does not send any explicit signals.
2.2 Cutoff mechanisms
Cutoff mechanisms were defined and discussed in the companion paper Dworczak
(2016).5 I restate the definition in the context of the binary model.
Definition 2. A mechanism frame (x, q) is a cutoff rule if x is non-decreasing, and the
signal function q can be represented as
q(θ)x(θ) = ∫_0^θ γ(c) dx(c), (2.2)

(1 − q(θ))x(θ) = ∫_0^θ (1 − γ(c)) dx(c), (2.3)
for each θ ∈ Θ, for some measurable signal function γ : Θ→ [0, 1].
A mechanism (x, π, t) is a cutoff mechanism if (x, π) is a cutoff rule.
Definition 2 has the following interpretation. Any non-decreasing allocation rule x
can be extended to a cumulative distribution function on the type space Θ. Let cx be
the random variable (the cutoff) with realizations in Θ and with distribution x. Then,
the allocation rule x can be implemented by drawing a realization c of the random
cutoff cx, and giving the good to the agent if and only if the reported type exceeds
c. In a cutoff mechanism, the signal distribution is determined (through the function
γ) by the realization of the random cutoff cx representing the allocation rule x. That
is, conditional on the cutoff, the signal is independent of the report of the agent. The
signal in a cutoff mechanism can be an arbitrary garbling of cx. In the binary model, it
is without loss of generality to focus on binary signals. Therefore, the signal function γ
is one-dimensional: γ(c) is the probability of recommending the high action conditional
on cutoff realization c.
Using the results from the companion paper, I provide the following alternative
characterization of cutoff mechanisms, which is often easier to work with.
Fact 2. A mechanism frame (x, q) is a cutoff rule if and only if x(θ)q(θ) and
x(θ)(1 − q(θ)) are both non-decreasing in θ.
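To illustrate Definition 2 with concrete (hypothetical) choices, take x(θ) = θ, so the cutoff cx is uniform on [0, 1], and γ(c) = c. Equation (2.2) then gives q(θ)x(θ) = θ²/2, and both monotonicity conditions in Fact 2 hold, as the following sketch verifies numerically:

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 201)
x = theta              # non-decreasing allocation rule = cdf of the cutoff c_x
gamma = lambda c: c    # illustrative signal function: P(high signal | cutoff c)

# q(theta) x(theta) = integral_0^theta gamma(c) dx(c); here dx(c) = dc,
# approximated on a midpoint grid.
dc = 0.0005
mid = np.arange(0.0, 1.0, dc) + dc / 2
qx = np.array([np.sum(gamma(mid[mid < t])) * dc for t in theta])

assert np.allclose(qx, theta**2 / 2, atol=1e-6)  # matches the closed form
# Fact 2: x*q and x*(1 - q) are both non-decreasing
assert (np.diff(qx) >= -1e-12).all()
assert (np.diff(x - qx) >= -1e-12).all()
```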
5 The reader is referred to the companion paper for a discussion of why cutoff mechanisms are an interesting class to study. This paper takes the class as given, and analyzes conditions under which cutoff mechanisms are optimal, without any reference to their additional desirable properties.
2.3 Preferences
The properties of optimal mechanisms will depend primarily on monotonicity properties
of the objective function and the constraints. The following definitions play a key role
in the analysis.
Definition 3. A function Φ : {l, h} × Θ → R is submodular if Φh(θ) − Φl(θ) is non-
increasing in θ, and is supermodular if Φh(θ)− Φl(θ) is non-decreasing in θ.
Definition 4. The function Ua(θ) = ua θ + ca is strongly submodular if uh = 0 and
ul > 0, and is strongly supermodular if uh > 0 and ul = 0.
Strong submodularity implies submodularity because the latter property only re-
quires that uh ≤ ul.
Definition 5. The preferences of two players are co-modular if their respective util-
ity functions are either both submodular or both supermodular. If one of the utility
functions is supermodular and the other one is submodular, the preferences of the two
players are called counter-modular.
Co-modularity is a notion of aligned preferences, in the sense that players with co-
modular preferences agree on the direction of the single crossing condition with respect
to the action a and the type θ. Counter-modularity is a notion of misaligned preferences.
Definition 6. The preferences of two players are strongly co-modular (strongly counter-
modular) if they are co-modular (counter-modular), one of the players is the agent, and
the agent’s utility is strongly submodular or strongly supermodular.
Strong counter-modularity differs from counter-modularity only in that the agent’s
utility is assumed to take the strong form of sub/supermodularity (see Definition 4).
3 Pure information intermediation
To understand the role that co-modularity and counter-modularity play in determining
how much information a mechanism can reveal in an incentive-compatible way, I study
an auxiliary problem. I assume that x(θ) is constant in the type θ so that the designer
engages in pure information intermediation: she elicits reports from the agent and sends
messages to the third party, without the possibility to influence the allocation.6 The
designer can still use transfers and has full commitment power.
6 It is without loss of generality to assume that x(θ) = 1, for all θ.
Theorem 1. Suppose that x(θ) is constant, and the agent’s and the third party’s pref-
erences are counter-modular. Then, any implementable mechanism frame (x, q) reveals
no information.
The theorem implies that if the designer attempts to send non-redundant signals in
the mechanism, the agent will misreport making the signals uninformative. The only
mechanisms consistent with truth-telling are ones that reveal no information about the
type of the agent.
To gain intuition for Theorem 1, consider the case when the agent’s utility is super-
modular (then, the third party’s utility is submodular). High types of the agent have
a higher willingness to pay for signals that lead to a high action of the third party.
However, the third party has incentives to take a high action only when she believes
that the agent’s type is low, that is, after seeing a signal that is chosen more often
by low types. We get a contradiction: If the designer sets a relatively high price for
a signal that leads to the high action, then only high types want to choose that signal,
but then that signal cannot induce the high action. And if the price for the high signal is
relatively low, then all types choose it, and hence the signal is uninformative.
If counter-modularity is replaced with co-modularity, the incentives of the agent
and the third party become more aligned, and the mechanism designer can make a
non-trivial impact on the information structure of the aftermarket. An easy example is
the case in which the third party and the agent share the same supermodular objective
function – then, the mechanism designer can fully reveal the type of the agent in the
mechanism (by appropriately setting up transfers).7
Intuitively, the ability of the designer to release information in the model with a
constant allocation rule can come from two sources: commitment power and the use
of transfers. Commitment power means that the designer can commit to a signal
distribution as a function of the type of the agent. Transfers can be used to screen
types with different willingness to pay. The third party does not have these tools since
she simply takes a binary action. To understand the difference between counter- and
co-modularity, it is instructive to consider the (hypothetical) case in which the third
party is allowed to use commitment power and transfers. Formally, suppose that the
third party offers a mechanism with transfers to the agent. The mechanism is a menu of
distributions over actions of the third party offered at different prices. The third party
7 Crawford and Sobel (1982) study a related model with cheap talk. That is, in their model, communication cannot be supported with transfers, and no player has commitment power. Their results on (im)possibility of communication have similar intuition.
maximizes the expectation of va(θ) but has no utility from the transfers (equivalently,
is required to run a budget-balanced mechanism).
When the preferences of the agent and the third party are counter-modular, it is easy
to show that the optimal mechanism elicits no information from the agent (hence, the
decision of the third party is constant). In other words, commitment power and transfers
are not useful in eliciting information in this case. However, when the preferences of
the agent and the third party are co-modular, the optimal mechanism divides the type
space into two intervals, and the high decision is taken when the type of the agent
lies in the high interval. In this case commitment power and transfers are effective in
eliciting information.
In the actual pure information intermediation problem, it is the designer who can use
commitment and transfers, rather than the third party. However, the intuition remains
the same: In the case of co-modularity, the mechanism can release information.
Theorem 1 can be extended to non-constant allocation rules that take the form of
a threshold rule.
Corollary 1. If the allocation rule x is a threshold rule, x(θ) = 1{θ ≥ θ*}, any
implementable mechanism (x, q) reveals no information.
The corollary follows directly from Theorem 1 by considering the truncated type
space [θ*, 1]. Signals sent conditional on types in [0, θ*) play no role because these
types do not participate in the aftermarket.
The main conclusion of this section is that if the agent’s and the third party’s
preferences are counter-modular, then information disclosure is only possible when the
allocation rule is non-constant. This will be the crucial force behind optimality of cutoff
mechanisms.
4 Optimal mechanisms
Under the assumptions of Section 2, the mechanism designer solves
max_{x, q} ∫_0^1 [Vh(θ)q(θ)x(θ) + Vl(θ)(1 − q(θ))x(θ)] f(θ) dθ (4.1)
subject to (M), (OBl), and (OBh).
I introduce optimal mechanisms in two steps. In the first step, I consider a problem
in which the allocation rule x is fixed, and the designer optimizes over disclosure rules
(the crucial difference to Section 3 is that x is not necessarily constant in the type).
In the second step, I consider joint optimization. Throughout, I focus on sufficient
conditions for optimality of cutoff mechanisms. Alternative optimal mechanisms are
discussed in Section 7.
4.1 Optimal information disclosure
In this subsection, I fix a non-decreasing allocation rule x and optimize over disclosure
rules.8 The content of this subsection can be interpreted in two ways. First, solving
for the optimal disclosure rule q given an allocation rule x is an intermediate step
towards solving the full design problem. Second, x can be interpreted as an interim
expected allocation in a multi-agent mechanism. The results of this subsection will be
immediately applicable to designing optimal disclosure policies in auctions (see Sections
5 and 6).
When the allocation rule is fixed, the objective function (4.1) becomes (up to a
constant that does not depend on q)
max_q ∫_0^1 [Vh(θ) − Vl(θ)] q(θ)x(θ)f(θ) dθ. (4.2)
Theorem 2. Fix a non-decreasing allocation rule x. Suppose that (i) the agent’s and
the third party’s preferences are strongly counter-modular, and (ii) the designer’s and
the third party’s preferences are co-modular. Then, there exists a cutoff rule (x, q*)
such that q* is a solution to the problem (4.2) subject to (M), (OBl), and (OBh).
Assumption (i) of Theorem 2 is essential for the result. Its informal meaning is that
the structure of the aftermarket makes it “hard” to disclose information in the mech-
anism. Theorem 1 states that under counter-modularity of the agent’s and the third
party’s preferences, no information can be revealed by a mechanism with a constant
allocation rule. In a cutoff mechanism with a constant allocation rule, no information
can be revealed either because the cutoff corresponding to a constant allocation rule is
8 The results of this section easily generalize to the case of allocation rules that are not non-decreasing. The optimal solution in this case can be obtained by replacing x(θ) in the monotonicity constraint (derived from condition M) with its lower monotone envelope, denoted x̲(θ), and defined as

x̲(θ) = sup{χ(θ) : χ(θ′) ≤ x(θ′) for all θ′, χ is non-decreasing}.

Because I give conditions for optimality of using a non-decreasing allocation rule in the next subsection, there is no need to consider this more general case.
a degenerate (deterministic) random variable. For a general allocation, a cutoff mech-
anism uses the allocation rule x as a “leverage” to elicit and reveal information about
the corresponding cutoff cx. As shown in the companion paper, this implies that a
cutoff rule is implementable regardless of the form of the aftermarket, that is, even if
the misaligned preferences in the aftermarket make it “hard” to disclose information in
the mechanism. In other words, information about the cutoff can always be disclosed
without violating incentive-compatibility of the mechanism.
Theorem 2 states that for counter-modular aftermarkets no information other than
the information about the cutoff is used at the optimal mechanism. When assumption
(i) fails, it is typically possible to reveal more information. Therefore, an optimal
mechanism may reveal more information than just information about the cutoff. Section
7 analyzes the structure of optimal mechanisms without assumption (i).
Assumption (ii), on the other hand, is a technical condition that simplifies solving
the problem. When the preferences of the third party and the mechanism designer are
co-modular, it is possible to improve any suboptimal mechanism by shifting probability
mass under q in the direction that increases the objective function (4.2) and preserves
the obedience constraint (OBh). This allows me to solve the problem by defining an
order (similar to first-order stochastic dominance) on the set of feasible mechanisms,
and arguing that the optimal mechanism must be a maximal point in that order. When
the preferences of the designer and the third party are counter-modular, the problem
becomes more difficult because no such order can be defined. Optimal control theory
can be used but the problem is intractable in many cases. This is because, unlike in
traditional mechanism design, the monotonicity constraint (M) will typically bind at
the optimal solution. In Section 6, I analyze several optimization problems in which
assumption (ii) is violated but a cutoff mechanism is nevertheless optimal.
4.2 Joint optimization
I turn attention to joint optimization over allocation and disclosure rules. Using The-
orem 2, I can provide the following sufficient conditions for optimality of cutoff mech-
anisms.
Theorem 3. Suppose that either
• the agent’s utility is strongly supermodular, the third party’s and the designer’s
utilities are submodular, and V l(θ) is non-decreasing, or
• the agent’s utility is strongly submodular, the third party’s and the designer’s
utilities are supermodular, and V h(θ) is non-decreasing.
Then, a cutoff mechanism is optimal for the problem (4.1) subject to (M), (OBl), and
(OBh).
Examples satisfying the assumptions of Theorem 3 are provided in Section 6 which
considers applications of the model.
The companion paper Dworczak (2016) establishes a general result about optimality
of no information disclosure in the class of cutoff mechanisms, when there is one agent in
the mechanism, and the designer optimizes jointly over allocation and disclosure rules.9
Thus, a corollary of Theorem 3 is that the optimal mechanism reveals no information
(in the sense defined in Subsection 2.1).
Corollary 2. Under the assumptions of Theorem 3, there always exists an optimal
mechanism that reveals no information.
The result should be interpreted with care. A no-information-revealing mechanism
is defined as one that does not send informative signals. However, even in the
absence of explicit signaling, the choice of the allocation rule and the fact
that trade took place will influence aftermarket beliefs. Thus, the aftermarket
will in general affect the structure of the optimal mechanism.
Corollary 2 does not extend to the multi-agent setting. An optimal cutoff mechanism
often sends explicit signals when there are multiple agents. I consider this case in the
next subsection.
5 Multi-agent mechanisms
In this section, I extend the model to multi-agent symmetric settings. I maintain the
assumption that only the agent who acquires the object in the mechanism interacts in
the aftermarket.
There are N ex-ante identical agents, indexed by i = 1, 2, ..., N . Each agent i has a
privately observed type θi ∈ Θ. Types are distributed i.i.d. according to a full-support
distribution f on Θ ≡ [0, 1]. The payoff of the designer and the third party may depend
9 This result does not depend on the objective function or the form of the aftermarket, so it is immediately applicable here.
on the type of the agent in the aftermarket but not on the agent’s identity. Therefore,
the utility functions V a(θ) and va(θ) take the same form as in the one-agent model.
Under these assumptions, it is without loss of generality to look at symmetric mech-
anisms. I consider Bayesian implementation.10 Symmetric N -agent mechanisms can be
represented by their reduced forms (x, π, t), where x : Θ → [0, 1], π : Θ → ∆(S),
and t : Θ → R are all one-dimensional functions, subject to the constraint that x is
feasible under f for some joint N-dimensional allocation rule (x_1, ..., x_N) with
\sum_{i=1}^{N} x_i(\theta) = 1 for all θ ∈ Θ^N:

x(\theta) = \int_{\Theta^{N-1}} x_i(\theta, \theta_{-i}) \prod_{j \neq i} f(\theta_j)\, d\theta_j. \quad (5.1)
For non-decreasing interim allocation rules x, condition (5.1) is equivalent to the so-
called Matthews-Border condition:
\int_{\tau}^{1} x(\theta)\, f(\theta)\, d\theta \;\le\; \frac{1 - F^N(\tau)}{N}, \quad \forall \tau \in [0, 1]. \quad (MB)
If the allocation rule x is not non-decreasing, condition (MB) is necessary but no longer
sufficient (see Matthews, 1984, and Border, 1991). I will consider the relaxed problem
with constraint (MB). Thus, the obtained solution is guaranteed to be feasible only if
the optimal x turns out to be non-decreasing (which will be the case in all subsequent
results).
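As a sanity check on (MB): the interim rule of an auction that always allocates to the highest of N types, x(θ) = F^{N−1}(θ), satisfies (MB) with equality at every τ, since ∫_τ^1 F^{N−1}(θ) f(θ) dθ = (1 − F^N(τ))/N. A numerical sketch (assuming, purely for illustration, F uniform on [0, 1] and N = 3):

```python
# Check the Matthews-Border condition (MB) for the interim rule
# x(theta) = F^{N-1}(theta) of a highest-type auction.
# Illustrative assumptions (not from the paper): F uniform on [0,1], N = 3.

N = 3
F = lambda t: t              # uniform CDF
f = lambda t: 1.0            # uniform density
x = lambda t: F(t) ** (N - 1)

def integral(g, a, b, n=10_000):
    """Midpoint-rule quadrature of g on [a, b]."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

for tau in [0.0, 0.25, 0.5, 0.9]:
    lhs = integral(lambda t: x(t) * f(t), tau, 1.0)
    rhs = (1 - F(tau) ** N) / N
    assert abs(lhs - rhs) < 1e-6, (tau, lhs, rhs)   # (MB) holds with equality
```

For this boundary rule the constraint binds at every τ; any feasible non-decreasing interim rule lies weakly below it in the sense of (MB).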
As before, it is without loss of generality to look at binary signal spaces, S = {l, h}. Then, letting q(θ) denote the probability of sending the high signal conditional on type
θ and allocating the object, any mechanism can be represented by its reduced form
(x, q, t).
Under Bayesian implementation, conditions for implementability of (x, q) are for-
mally identical to those from the one-agent model. That is, equations (M), (OBl),
(OBh) together with (MB) fully characterize all implementable pairs (x, q).
Finally, the objective function of the mechanism designer is given by
N \int_0^1 \big( V^h(\theta)\, q(\theta)\, x(\theta) + V^l(\theta)\, (1 - q(\theta))\, x(\theta) \big) f(\theta)\, d\theta \quad (5.2)
which only differs from the one-agent objective function in that it is multiplied by N .
A symmetric mechanism in the multi-agent setting is defined as a cutoff mechanism
10 Under sufficient conditions for optimality of cutoff mechanisms, the optimal mechanism will be dominant-strategy incentive-compatible, by the results in the companion paper Dworczak (2016).
if its reduced form is a one-agent cutoff mechanism.
For a fixed interim allocation rule, we obtain a problem that is formally identical
to the one-agent problem considered in the previous sections (the additional constraint
MB only pertains to the allocation rule). Therefore, Theorem 2 applies without any
modifications (with x interpreted as an interim allocation rule). Theorem 3 can be
extended as well, although this requires a proof.
Theorem 4. Suppose that either
• each agent’s utility is strongly supermodular, the third party’s and the designer’s
utilities are submodular, and V l(θ) is non-decreasing, or
• each agent’s utility is strongly submodular, the third party’s and the designer’s
utilities are supermodular, and V h(θ) is non-decreasing.
Then, a cutoff mechanism is optimal for the problem (5.2) subject to (M), (OBl),
(OBh), and (MB).
6 Applications
In this section, I present three applications. In the first application, I explicitly solve
for optimal mechanisms. In the second and third applications, I only show that the
optimal mechanism is a cutoff mechanism; optimization within the cutoff class can
then be performed analogously to the first application.
6.1 Resale
In the first example, the aftermarket is a resale game. Resale after auctions (or after
bilateral trade) is a common phenomenon in financial over-the-counter markets, treasury
auctions, spectrum auctions, and art auctions.
The model is stylized to keep the binary structure assumed in this paper. There
are N agents, where N ≥ 1. The type θi of agent i is interpreted as the probability
that the object's initially unknown value is high. The value can be high (vh) or low (vl), and
is learned by the agent upon acquiring the object. The third party has a value v for
holding the object, with v > vh > vl ≥ 0, and makes a take-it-or-leave-it offer to the
agent in the aftermarket.
Because of the binary value of the agent in the aftermarket, it is without loss of
generality to assume that the third party will offer a price vh or vl. Thus, this setting
admits the structure of the baseline binary model. Any agent i always accepts the
offer vh, and accepts the offer vl only if her ex-post value is low, i.e. with interim
probability 1 − θi. Thus, in equation (2.1) we have uh = 0, ch = vh, ul = vh − vl, and
cl = vl, i.e. the agent's utility is strongly submodular. The third party has vh(θ) = v − vh
and vl(θ) = (1 − θ)(v − vl), i.e. the third party's utility is supermodular.
I will consider two objective functions of the mechanism designer: efficiency and
revenue. Under the assumption that V h(θ) is non-decreasing in θ, let γh be the smallest
number in [0, 1] such that V h(θ) ≥ 0 for all θ ≥ γh. To avoid trivial cases, I assume
that

\int_{\gamma_h}^{1} \big( v^h(\theta) - v^l(\theta) \big)\, F^{N-1}(\theta)\, f(\theta)\, d\theta < 0. \quad (6.1)
If condition (6.1) fails, it is optimal for the third party to take the high action under the
“myopically optimal” allocation rule – allocate to the agent with the highest type (or
to the only agent if N = 1) as long as that type exceeds the threshold γh above which
the objective function is non-negative. Because the high action is always preferred by
the designer to the low action, the mechanism that implements this myopically optimal
allocation rule and discloses no information achieves the upper bound on the payoff to
the mechanism designer, and is hence trivially optimal. Assumption (6.1) implies that
this upper bound is not achievable.
6.1.1 Socially optimal mechanisms
Suppose that the mechanism designer maximizes efficiency in the market. When the
resale price is high, the total surplus (conditional on allocating the good) is v, and
otherwise it is (1− θ)v+ θvh because only low-value agents resell for a low price. Thus,
V h(θ) = v, and V l(θ) = (1 − θ)v + θvh, i.e. the designer’s utility is supermodular. By
Theorem 2, a cutoff mechanism is optimal for any fixed non-decreasing allocation rule.
By Theorem 3, because V h(θ) is non-decreasing in θ, a cutoff mechanism is also optimal
for the joint optimization problem. By Corollary 2, the optimal mechanism reveals no
information when N = 1.
The rest of this subsection describes the optimal cutoff mechanism (which is also
the optimal mechanism overall) in three cases: optimization over disclosure rules, joint
optimization with one agent, and joint optimization with multiple agents.
Optimal information disclosure.
Claim 1. For a fixed non-decreasing (interim) allocation rule x, suppose that
\int_0^1 \big( \theta (v - v_l) - (v_h - v_l) \big)\, x(\theta)\, f(\theta)\, d\theta \;\ge\; 0. \quad (6.2)

Then, the socially optimal mechanism that implements x reveals no information. If
condition (6.2) fails, define x_res as the smallest solution to the equation

\int_0^1 \big( \theta (v - v_l) - (v_h - v_l) \big)\, \max\{x(\theta) - x_{res},\, 0\}\, f(\theta)\, d\theta = 0. \quad (6.3)

Then, the socially optimal mechanism is given by q(θ) = max{1 − x_res/x(θ), 0}.
When condition (6.2) holds, it is optimal not to send any signals in the mechanism
because in the absence of additional information, the third party takes the high action.
To understand the case when (6.2) fails, define θ*_res by

\theta^\star_{res} = \sup\{ \theta \in \Theta : x(\theta) \le x_{res} \}. \quad (6.4)
When x is continuous, x_res = x(θ*_res). The optimal mechanism recommends the low
action with conditional probability one when the good is allocated to a type θ below
θ*_res, and with conditional probability x_res/x(θ) when the good is allocated to a type
θ above θ*_res. The intuition is that the mechanism has to exclude enough low types
from the high signal to induce the high action of the third party. This is possible when
x(θ) is non-constant and non-decreasing: The unconditional probability of sending a
high signal for type θ ≥ θ*_res is equal to x(θ) − x_res, so that higher types have a higher
probability of receiving a high price in the aftermarket. The unconditional probability of
sending the low signal is constant and (in general) non-zero for all types above θ*_res. This
is necessary to keep the mechanism incentive-compatible. If the highest type received
the high price with probability one, low types would deviate and report a high type.
The non-zero probability of a low signal provides the necessary separation between low
and high types (only when the low signal is sent, high types have a strictly higher value
for winning the object).
Suppose that N ≥ 2, and x(θ) is the interim allocation rule FN−1(θ) corresponding
to an auction that allocates to the highest type. Then, the optimal disclosure rule is
to announce whether the second highest type was above or below θ*_res. The optimal
mechanism is indeed a cutoff mechanism because the second highest type (the highest
competing type from the perspective of the winner) is the cutoff representing x. The
mechanism can be implemented by running a second price auction and announcing
whether or not the price paid by the winner exceeded a threshold p*_res, where p*_res is the
bid made by type θ*_res in a monotone equilibrium of the auction.
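Claim 1 can be illustrated numerically. The sketch below uses purely illustrative parameters (v = 1, v_h = 0.8, v_l = 0, F uniform on [0, 1], N = 2, so the highest-type interim rule is x(θ) = θ); it checks that (6.2) fails and then solves (6.3) for x_res by bisection:

```python
# Illustrate Claim 1: verify that condition (6.2) fails, then solve (6.3)
# for x_res. Illustrative parameters only (not from the paper):
# v = 1.0, vh = 0.8, vl = 0.0, F uniform on [0,1], N = 2, so x(theta) = theta.

v, vh, vl = 1.0, 0.8, 0.0
x = lambda t: t

def integral(g, n=5000):
    """Midpoint-rule quadrature of g on [0, 1]."""
    h = 1.0 / n
    return sum(g((k + 0.5) * h) for k in range(n)) * h

def lhs(x_res):
    """Left-hand side of equation (6.3)."""
    return integral(lambda t: (t * (v - vl) - (vh - vl)) * max(x(t) - x_res, 0.0))

# Condition (6.2) fails here: the integral below is negative.
assert integral(lambda t: (t * (v - vl) - (vh - vl)) * x(t)) < 0

lo, hi = 0.0, 0.8          # lhs(0) < 0 < lhs(0.8): a sign change is bracketed
for _ in range(50):        # bisection on the bracketed root
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if lhs(mid) < 0 else (lo, mid)
x_res = (lo + hi) / 2
theta_star_res = x_res     # by (6.4), since x(theta) = theta is continuous
```

With these illustrative numbers the smallest solution works out to x_res = 0.4, so conditional on allocating, the low action is recommended with probability min{0.4/x(θ), 1}.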
Optimal mechanism for N = 1. When N = 1, joint optimization over allocation and
disclosure rules yields a mechanism that reveals no information, by Corollary 2.
Claim 2. When N = 1, one of the following two mechanisms is optimal:
(a) x(θ) = 1 and q(θ) = 0, for all θ,
(b) x(θ) = 1{θ ≥ θ̲} and q(θ) = 1, where θ̲ is defined by

\mathbb{E}_f[\theta \mid \theta \ge \underline{\theta}] = \frac{v_h - v_l}{v - v_l}.

Mechanism (a) is optimal if and only if

\mathbb{E}_f[\theta] \;\le\; \frac{v}{v - v_h}\, F(\underline{\theta}).
The intuition for Claim 2 is straightforward given Corollary 2. The optimal mecha-
nism releases no information, so it either always induces the low price (case a) or always
induces the high price (case b). If the designer is not trying to affect the default price
in the aftermarket (which is low under assumption (6.1)), it is optimal to always allocate
the object because the objective function is non-negative. To induce the high price, the
designer excludes low types from trading: the threshold θ̲ is chosen so that the third
party is exactly indifferent between the high and the low price (and quotes the high
price).
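Claim 2 is easy to evaluate in a parametric example. The sketch below uses illustrative numbers (F uniform on [0, 1], v = 1, v_h = 0.8, v_l = 0, none of which come from the paper) and checks which of the two mechanisms is optimal:

```python
# Evaluate Claim 2 in a parametric example. Illustrative numbers only:
# F uniform on [0,1], v = 1.0, vh = 0.8, vl = 0.0.
# For the uniform distribution, E[theta | theta >= t] = (1 + t)/2, so the
# threshold solving E[theta | theta >= t] = (vh - vl)/(v - vl) is:

v, vh, vl = 1.0, 0.8, 0.0
theta_bar = 2 * (vh - vl) / (v - vl) - 1      # = 0.6 here
assert 0 < theta_bar <= 1

# Mechanism (a): always allocate, low price; welfare = E[(1-theta) v + theta vh].
welfare_a = v - 0.5 * (v - vh)
# Mechanism (b): allocate only above theta_bar, high price; welfare = v (1 - F).
welfare_b = v * (1 - theta_bar)

# Claim 2's optimality condition: E[theta] <= v/(v - vh) * F(theta_bar).
a_is_optimal = 0.5 <= v / (v - vh) * theta_bar
assert a_is_optimal == (welfare_a >= welfare_b)
```

Here the condition holds (0.5 ≤ 3), so mechanism (a) is optimal: always selling at a low aftermarket price beats excluding all types below 0.6 to induce the high price.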
Optimal mechanism for N > 1. Finally, I present results about optimal mechanisms
for the case of multiple agents.
Claim 3. Under assumption (6.1), which here takes the form E_f[θ^(1)_N] < (v_h − v_l)/(v − v_l),
where θ^(1)_N denotes the first order statistic of (θ_1, ..., θ_N), one of the following two
mechanisms is optimal when N > 1:
(a) For some type θ* ∈ (0, 1),11

x(\theta) =
\begin{cases}
(1/N)\, F^{N-1}(\theta^\star) & \theta < \theta^\star, \\
F^{N-1}(\theta) & \theta \ge \theta^\star,
\end{cases}

and

q(\theta) = \max\left\{ 1 - \frac{(1/N)\, F^{N-1}(\theta^\star)}{F^{N-1}(\theta)},\; 0 \right\},

(b) x(θ) = F^{N−1}(θ) 1{θ ≥ r*} and q(θ) = 1, where r* is defined by

\mathbb{E}_f\big[ \theta^{(1)}_N \,\big|\, \theta^{(1)}_N \ge r^\star \big] = \frac{v_h - v_l}{v - v_l}.
Mechanism (a) is optimal whenever the regularity condition (A.42) defined in Ap-
pendix A.7 holds.12
I focus on case (a) in the discussion. (Appendix A.7 contains the discussion of case
(b).) The following indirect implementation of the optimal mechanism from case (a) is
possible. The designer names a price p*, and agents simultaneously accept or reject.
If all agents reject, the object is allocated uniformly at random. If exactly one agent
accepts, she gets the object at price p*. If more than one agent accepts, the designer runs
a tie-breaking auction with reserve price p* among the agents who accepted. The designer
only reveals whether or not the auction took place.
Intuitively, in order to induce a high price with positive probability, the designer
runs a two-step procedure, and announces whether the second step (the auction) was
reached. The auction is a signal of the winner's high value. The price
p* is set in such a way that, conditional on announcing that the auction took place, the
third party is indifferent between the high and the low price (and quotes the high price).
To gain intuition for the optimal design of the allocation rule, suppose that no
agent accepts the initial price p?. Conditional on this event, the low price vl will be
offered in the second stage. To maximize the probability of resale (which is consistent
with social surplus maximization because the third party has a higher value than the
agent), the designer should allocate to the lowest type. However, incentive-compatibility
constraints make it impossible to allocate to low types more often than to high types.
Therefore, the mechanism allocates the object by a uniform lottery.
11 See equation (A.39) in Appendix A.7.
12 The regularity condition is satisfied, for example, by all distributions F(θ) = θ^κ for κ > 0.
6.1.2 Profit-maximizing mechanisms
In this subsection, I derive the profit-maximizing mechanisms for the resale model. I
let J(θ) ≡ θ − (1 − F(θ))/f(θ) denote the virtual surplus function. I say that the
distribution F is regular when J(θ) is non-decreasing in θ. I denote

\bar{J} = \max\{\, J'(\theta) : \theta \in \Theta \,\},

and assume that J̄ is well defined and finite.
Using the envelope formula, I can express information rents of an agent with type
θ as

U(\theta) = U(0) + (v_h - v_l) \int_0^{\theta} (1 - q(\tau))\, x(\tau)\, d\tau.

In the revenue-maximizing mechanism, U(0) = 0. Thus, transfers are uniquely pinned
down, and the objective function of the designer can be shown to equal

\int_0^1 \Big[ v_h\, x(\theta) + (v_h - v_l)\, (J(\theta) - 1)\, (1 - q(\theta))\, x(\theta) \Big] f(\theta)\, d\theta.
Thus, we can take V h(θ) = vh and V l(θ) = vh − (vh − vl)(1− J(θ)). If the distribution
F is regular, V a(θ) is submodular.
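This expression follows from standard steps (a sketch, using U^h(θ) = v_h and U^l(θ) = v_l + θ(v_h − v_l) from the resale setup): expected revenue equals the agents' expected gross utility minus expected information rents, and integration by parts turns the rent term into a virtual-surplus adjustment:

```latex
% Sketch: revenue = expected gross utility - expected information rents.
\begin{aligned}
\mathbb{E}[U(\theta)]
  &= (v_h - v_l)\int_0^1 (1 - F(\theta))\,(1 - q(\theta))\,x(\theta)\,d\theta, \\
\text{Revenue}
  &= \int_0^1 x(\theta)\,\big[v_h - (1 - q(\theta))(v_h - v_l)(1 - \theta)\big]\,f(\theta)\,d\theta
     \;-\; \mathbb{E}[U(\theta)] \\
  &= \int_0^1 \big[\,v_h\,x(\theta) + (v_h - v_l)\,(J(\theta) - 1)\,(1 - q(\theta))\,x(\theta)\,\big]\,f(\theta)\,d\theta.
\end{aligned}
```

The first line uses U(0) = 0 and integration by parts; the last line collects (1 − θ) and (1 − F)/f into 1 − J(θ).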
Optimal information disclosure. I cannot apply Theorem 2 directly because the pref-
erences of the mechanism designer and the third party are not co-modular. However,
under additional regularity assumptions, the problem can be solved by applying optimal
control techniques.
Claim 4. Fix a non-decreasing and absolutely continuous (interim) allocation rule x.13
Further, suppose that F is regular, and that
\frac{v_h - v_l}{v - v_l} \;\le\; \mathbb{E}[\theta \mid \theta \ge \theta^\star_{res}] + \frac{1 - \theta^\star_{res}}{\bar{J}}, \quad (6.5)

where θ*_res was defined in (6.4). Then, the highest expected revenue over all mechanisms
implementing x can be obtained by using a cutoff mechanism. Moreover, in this case, the
profit-maximizing mechanism coincides with the welfare-maximizing mechanism from
Claim 1. If F is the uniform distribution on [0, 1], condition (6.5) holds.
13 It is enough if x is absolutely continuous on {θ ∈ [0, 1] : x(θ) > 0}, i.e. x can be equal to zero on some initial interval.
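The last statement of Claim 4 can be verified directly: for the uniform distribution, J(θ) = 2θ − 1, so J̄ = 2 and E[θ | θ ≥ θ*_res] = (1 + θ*_res)/2, and the right-hand side of (6.5) equals

```latex
\mathbb{E}[\theta \mid \theta \ge \theta^\star_{res}] + \frac{1 - \theta^\star_{res}}{\bar J}
  = \frac{1 + \theta^\star_{res}}{2} + \frac{1 - \theta^\star_{res}}{2}
  = 1 \;\ge\; \frac{v_h - v_l}{v - v_l},
```

where the final inequality holds because v > v_h ≥ v_l. Thus (6.5) holds for the uniform distribution regardless of θ*_res.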
Optimal mechanism for N = 1. Because the joint optimization problem is an optimal
control problem with two control variables and a monotonicity constraint, it is in general
difficult to solve. I focus on the case in which I can solve the problem by relaxing the
monotonicity constraint.14 I do not have to assume regularity of the distribution F but
instead (for technical reasons) I assume that the virtual surplus function J(θ) is convex.
Claim 5. Suppose that J is convex. Define θ̲ by

\mathbb{E}_f[\theta \mid \theta \ge \underline{\theta}] = \frac{v_h - v_l}{v - v_l}. \quad (6.6)

If v_l + (v_h − v_l) J(θ) ≤ 0 for all θ ≤ θ̲, then the profit-maximizing mechanism is given
by x(θ) = 1{θ ≥ θ̲} and q(θ) = 1.
The optimal mechanism under the assumption of Claim 5 is a cutoff mechanism, and
thus, by Corollary 2, reveals no information. The allocation rule excludes just enough
low types from trading so that a high price is quoted in the aftermarket conditional on
trade in the mechanism.
Suppose, for example, that F is the uniform distribution on [0, 1] and v_l = 0. Then θ̲
defined by (6.6) is given by θ̲ = 2 v_h / v − 1, and is contained in (0, 1] under assumption
(6.1). When v_h / v ≤ 3/4, it is optimal to sell to all types above θ̲ and reveal no
information. When v_h / v > 3/4, so that θ̲ > 1/2, it becomes harder to induce a high
price in the aftermarket, in the sense that the mechanism has to exclude even relatively
high types from trading. At some point, it becomes optimal not to induce a high price
at all, in which case the aftermarket has no influence on the agent’s value – the optimal
mechanism is identical to the one that would arise in the absence of an aftermarket (i.e.
the seller sells to all types with non-negative virtual surplus).
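The threshold logic in this paragraph can be checked directly. A sketch (illustrative only; uniform F and v_l = 0, so J(θ) = 2θ − 1 and θ̲ = 2v_h/v − 1):

```python
# Claim 5 in the uniform example with vl = 0: the condition
# "vl + (vh - vl) J(theta) <= 0 for all theta <= theta_bar" reduces,
# since J(theta) = 2*theta - 1 is increasing, to J(theta_bar) <= 0,
# i.e. theta_bar <= 1/2, i.e. vh/v <= 3/4. Illustrative numbers only.

def claim5_applies(vh_over_v):
    theta_bar = 2 * vh_over_v - 1      # from (6.6) with F uniform, vl = 0
    # J is increasing, so it suffices to check the condition at theta_bar.
    return theta_bar <= 0.5

assert claim5_applies(0.70)      # sell to all theta >= theta_bar, no disclosure
assert not claim5_applies(0.80)  # theta_bar > 1/2: Claim 5's condition fails
```

The second case is exactly the regime described above, where excluding enough types to sustain the high price becomes too costly.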
Optimal mechanism for N > 1. The optimal mechanism for the case N > 1 can be
quite complicated, and I only offer an informal discussion based on numerical solutions.
If the third party's value v is large relative to vh (fixing vl), so that it
is relatively easy to induce a high price in the aftermarket, the optimal mechanism
will often take a form analogous to that in Claim 5. The seller runs an auction in
which the good is allocated to the highest type subject to a reserve price. The reserve
price is chosen in such a way that conditional on allocating the object the third party
14 In other cases, the monotonicity constraint will bind at the optimal solution, and standard optimal control techniques cannot be applied, especially as one needs to allow for jumps in the variables.
is indifferent between offering a high and a low price (and offers a high price). No
additional information is revealed.
However, when vh is relatively close to v (fixing vl), the reserve price would have
to be very high in order to induce a high price in the aftermarket. There are cases
in which the optimal mechanism is still a cutoff mechanism but sends explicit (binary)
signals. The optimal allocation rule in this case is similar to that from point (a) of
Claim 3 but can feature multiple regions of (i) exclusion from trade, (ii) uniform
randomization, and (iii) allocation to the highest type.
6.2 Acquiring a complementary good
In this subsection, I consider a different example of an aftermarket game. After the
agent acquires the object in the mechanism, she can buy a complementary good from
a monopolist (third party). The combined value for holding both objects is high (vh)
or low (vl), with ex-ante probability αθ+ β, and 1−αθ− β, respectively, where α > 0,
β ≥ 0 and α + β ≤ 1. If the agent fails to acquire the complementary good in the
aftermarket, she gets a reservation value r ≥ 0. It is assumed that r < vl < vh. The
monopolist quotes an optimal monopoly price given the information revealed by the
first-stage mechanism.
In this setting, the third party quotes either a price vl−r or vh−r. The high action
corresponds to a high probability of trade in the aftermarket, i.e. a low price quoted by
the third party. Thus, uh = α(vh − vl), ch = r + β(vh − vl), ul = 0, and cl = r. The agent's
utility is strongly supermodular. Moreover, vh(θ) = vl − r and vl(θ) = (αθ + β)(vh − r), so
the third party's utility is submodular.
Socially optimal mechanisms Efficiency maximization corresponds to V l(θ) = (αθ+
β)vh + (1−αθ−β)r, and V h(θ) = (αθ+β)vh + (1−αθ−β)vl. The designer’s utility is
submodular. Theorem 2 implies that a cutoff mechanism is optimal when the designer
optimizes over disclosure rules for any fixed allocation rule. Theorem 3 and Theorem
4 imply that a cutoff mechanism is optimal for the joint design problem, regardless of
the number of participating agents.
Profit-maximizing mechanisms By a derivation analogous to that in Subsection 6.1, we
obtain V h(θ) = r + β(vh − vl) + α(vh − vl)J(θ) and V l(θ) = r. Thus, the designer's utility
is supermodular as long as the distribution F is regular. I cannot directly apply Theorems 2 - 4. However,
under regularity conditions similar to those in Claim 4 and Claim 5, a cutoff mechanism
can be shown to be optimal.
6.3 Lemons market
Suppose that the third party has value v(θ) for holding the asset, where v(θ) is a
continuous non-decreasing function. I assume that v(θ) ≥ θ, i.e. it is always socially
efficient for the third party to be the final holder of the good. For simplicity, and to
keep the binary structure of the model, I assume that the price in the aftermarket is
fixed at p. It is without loss of generality for a model with fixed allocation x to assume
that p = 1.15 The third party decides whether to trade or not, taking into account the
adverse selection problem.
In this setting, the high action corresponds to the decision of the third party to
trade. For the agent, we have uh = 0, ch = 1, ul = 1, and cl = 0 – utility is strongly
submodular. The utility of the third party is given by vh(θ) = v(θ)− p, vl(θ) = 0, and
is thus supermodular.
Socially optimal mechanisms If the mechanism designer wants to maximize effi-
ciency, we have V h(θ) = v(θ), V l(θ) = θ. If v(θ) − θ is non-decreasing (this is the case,
for example, when v(θ) = κθ for some κ > 1, or when v(θ) = θ + δ for some δ > 0), then a
cutoff mechanism is optimal, by Theorems 2 - 4.
Profit-maximizing mechanisms To maximize revenue, the designer would set V h(θ) =
p, V l(θ) = J(θ). Theorem 2 cannot be applied directly but sufficient conditions can be
derived for a cutoff mechanism to be optimal, in a fashion similar to Claims 4 and 5.
7 Counterexamples and robust payoff bounds
In this section, I analyze cases in which the assumption of strong counter-modularity of
the agent's and the third party's preferences fails. First, I consider the effects of relaxing
counter-modularity altogether, and show that when the preferences of the agent and
the third party are co-modular, more information about the agent can be revealed by
the mechanism, compared to a cutoff mechanism. Second, I consider the consequences
of relaxing strong counter-modularity to counter-modularity.
15 Types above p are irrelevant because they never resell. It is thus without loss of generality to assume that all types are below p, or that p = 1.
Fig. 7.1: A Cutoff Mechanism versus a Partitional Mechanism
[Figure: two panels plotting x(θ), q(θ)x(θ), and (1 − q(θ))x(θ) against θ ∈ [0, 1]. Left panel: a binary cutoff mechanism. Right panel: a binary partitional mechanism, with the type space split into "Low Action" and "High Action" regions.]
7.1 Relaxing counter-modularity
The following definition introduces a class of mechanisms that will emerge as optimal
under co-modularity of the agent’s and third party’s preferences.
Definition 7. A mechanism frame (x, q) is a partitional mechanism if the signal is a
deterministic function of the type: q(θ) ∈ {0, 1}, for all θ ∈ Θ.
A partitional mechanism defines a binary partition of the type space, with the two
sets in the partition corresponding to the two possible actions of the third party. In
an implementable partitional mechanism, the partition consists of two intervals. A
partitional mechanism is typically not a cutoff mechanism, and vice versa (the only
exceptions are boundary cases, for example, a mechanism that makes no announcements
is both a partitional and a cutoff mechanism). Figure 7.1 contrasts a typical shape of
a cutoff and a partitional mechanism.
Co-modularity of the agent's and the third party's utilities can be interpreted as a
notion of aligned preferences, which makes it easier to support information disclosure
in the aftermarket. As a result, the optimal mechanism reveals more information about
the agent's type, in the sense that the induced signal partitions the type space.
Theorem 5. Fix any allocation rule x. If (i) the agent’s and the third party’s prefer-
ences are strongly co-modular and (ii) the designer’s and the third party’s preferences
are co-modular, then the optimal mechanism is a partitional mechanism.
Because Theorem 5 applies to any allocation rule (including allocation rules that
are not non-decreasing), its assumptions are also sufficient for optimality of partitional
mechanisms in the joint design problem, with one or multiple agents. Conditions (sim-
ilar to those in Theorems 3 and 4) can be provided under which a non-decreasing
allocation rule is optimal for the joint design problem.
The following example illustrates Theorem 5.
Example 1. A mechanism designer allocates a license that the agent must have in
order to pursue a project. Moreover, for the project to succeed, the agent needs to
receive external funding from the third party. If both of these elements are provided,
then the project succeeds with probability θ equal to the type of the agent. Otherwise,
the project fails for sure. If the project succeeds, it generates a payoff of 1.
In the mechanism, the designer allocates the license to the agent. In the aftermarket,
the third party takes a binary decision: invest or not. Investing incurs a sunk cost c but
in the event of a success, the third party receives a fraction β of the generated surplus.
The remaining fraction 1− β goes to the agent.
The designer maximizes the profit of the agent (or, equivalently, tax revenues for a
fixed tax rate τ) conditional on allocating the license, and receives a (possibly negative)
payoff π if the license is not allocated.
In this setting, uh = 1 − β, ul = 0, so the agent's utility is strongly supermodular.
The designer's objective is thus also supermodular. For the third party, vh(θ) = βθ − c
and vl(θ) = 0, so the utility is supermodular. Theorem 5 implies that a partitional
mechanism is optimal.
Suppose that π is sufficiently negative so that the designer always allocates the
license. Then, the optimal mechanism charges a fee for the license to agents above a
threshold type α* but, in exchange, sends a message that induces the third party to
invest in the project. Agents below α* receive the license for free, but the mechanism
sends a message that induces the third party not to invest. This simple communication
mechanism is possible because, unlike in the case of counter-modularity, higher types
of the agent have a higher willingness to pay for the high signal. At the same time, the
third party’s preferences are aligned – she is willing to take the high action when the
agent’s type is high.
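To make the threshold concrete: with v^h(θ) = βθ − c, the third party invests after a message iff β·E[θ | message] ≥ c. Under the assumption (mine, for illustration) that the license is always allocated and the partition is an interval split at α*, the lowest sustainable threshold solves β·E[θ | θ ≥ α*] = c. A sketch with illustrative numbers:

```python
# Example 1: find the partition threshold alpha_star at which the third party
# is just willing to invest after the high message. Illustrative assumptions
# only: F uniform on [0,1], beta = 0.5, c = 0.35 (not from the paper).

beta, c = 0.5, 0.35
# Invest after the high message iff beta * E[theta | theta >= a] >= c.
# Uniform: E[theta | theta >= a] = (1 + a)/2, so the smallest such a is:
alpha_star = max(0.0, 2 * c / beta - 1)
assert 0 <= alpha_star < 1

# Sanity checks: investing breaks even above the threshold, loses below it.
assert beta * (1 + alpha_star) / 2 - c >= -1e-12
assert beta * (alpha_star / 2) - c < 0       # E[theta | theta < a] = a/2
```

The low message (types below α*) then makes the third party strictly unwilling to invest, which is exactly the partitional structure of Theorem 5.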
7.2 Relaxing strong counter-modularity
In this section, I relax the assumption that the agent’s utility is strongly counter-
modular. Using an example, I show that the optimal mechanism may sometimes lie
outside of the cutoff class. However, I show that a cutoff mechanism is typically close
to optimal, in the sense that it achieves a large fraction of the optimal payoff in the
worst case.
The model I study is a simple variation of the resale model of Subsection 6.1 with
one agent. The only difference is that the aftermarket takes place with probability
λ ≤ 1. That is, with probability 1− λ, the third party is absent, and the agent enjoys
the utility of being the final holder of the asset. This may seem to be an innocuous
difference but it actually changes the structure of the optimal solution. In particular,
with λ = 1, it is never necessary to reveal information in the optimal mechanism but
with λ < 1 this is no longer true – sometimes the optimal mechanism will send explicit
signals.
With λ < 1, the agent's utility is still submodular but no longer strongly submodular.
To simplify notation, I will denote by E(θ) = θvh + (1 − θ)vl the expected value of
the agent in the absence of the aftermarket. We have Uh(θ) = (1− λ)E(θ) + λvh, and
U l(θ) = E(θ). Therefore, Uh(θ)−U l(θ) is non-increasing, as required by the definition
of submodularity, but because uh = (1− λ)(vh − vl) and ul = vh − vl, the utility is not
strongly submodular.
The mechanism designer maximizes the expectation of some generic function V a(θ),
for a ∈ {l, h} (which will typically depend on λ). I assume that V l(θ) and V h(θ) are
both non-decreasing. I define γa ∈ [0, 1] as the (largest) point at which V a(θ) = 0, for
a ∈ {l, h}.16 I assume that
\int_{\gamma_h}^{1} \big( v^h(\theta) - v^l(\theta) \big)\, f(\theta)\, d\theta < 0. \quad (7.1)
If assumption (7.1) fails, the optimal mechanism is trivial: allocate to all types above
γh and reveal no information (this is a special case of condition (6.1) with N = 1). I
define α* as the smallest solution to

\int_{\alpha}^{1} \big( v^h(\theta) - v^l(\theta) \big)\, f(\theta)\, d\theta = 0. \quad (7.2)
16 If V a(θ) does not cross zero at all, then γa = 0 if V a(θ) ≥ 0, and γa = 1 if V a(θ) ≤ 0.
That is, when the good is allocated only to types above α*, it is optimal for the third
party to offer a high price (assumption (7.1) implies that γh < α*).
Proposition 1. Under the above assumptions, the optimal mechanism takes one of the
two possible forms:
1. x(θ) = 1{θ ≥ γl}, q(θ) = 0 (no information is revealed and a low price is quoted in
the aftermarket),17

2.

x(\theta) =
\begin{cases}
0 & \theta < \gamma_l, \\
1 - \lambda & \gamma_l \le \theta < \alpha^\star, \\
1 & \alpha^\star \le \theta,
\end{cases}

and q(θ) = 1{θ ≥ α*}.

Mechanism 1 is optimal if and only if

\int_{\gamma_l}^{1} V^l(\theta) f(\theta)\, d\theta \;\ge\; (1 - \lambda) \int_{\gamma_l}^{\alpha^\star} V^l(\theta) f(\theta)\, d\theta + \int_{\alpha^\star}^{1} V^h(\theta) f(\theta)\, d\theta.
Mechanism 1 is a cutoff mechanism but mechanism 2 is not (in fact, mechanism
2 is a partitional mechanism). In mechanism 2, types below γl do not get the object,
types in [γl, α*) receive the object with probability 1 − λ and always face a low resale
price in the aftermarket (whenever it takes place), and types in [α*, 1] receive the object
with probability 1 and always face a high resale price in the aftermarket (whenever it
takes place).
To better understand the intuition for optimality of mechanism 2, it is useful to
compare it to the optimal cutoff mechanism. In one-agent settings, optimal cutoff
mechanisms reveal no information (see Corollary 2). Therefore, it is easy to derive the
structure of the optimal cutoff mechanism (I omit the proof).
Proposition 2. Under the above assumptions, the optimal cutoff mechanism is either
mechanism 1 from Proposition 1, or mechanism 3: x(θ) = 1{θ ≥ α*}, q(θ) = 1 (no
information is revealed and a high price is quoted in the aftermarket), with α* defined
by (7.2).
17 For some parameters this mechanism is not feasible because a high price is optimal for the third party when γl > α*. However, in this case mechanism 1 is dominated by mechanism 2.
Mechanism 3 from Proposition 2 and mechanism 2 from Proposition 1 coincide
when λ = 1. However, mechanism 2 dominates when λ < 1. Both mechanisms induce
a high resale price for types above α*. It is not possible to induce a high resale price for
types below α* because, by definition, α* is the smallest threshold that still induces a
high resale price. However, mechanism 2 allocates to types below α* and recommends
a low price, while the cutoff mechanism 3 does not allocate at all.
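The dominance of mechanism 2 over mechanism 3 can be checked numerically. The sketch below uses the efficiency objective of the resale example with illustrative parameters (v = 1, v_h = 0.8, v_l = 0, λ = 0.5, F uniform); the payoff functions V^h, V^l below are my assumed λ-weighted efficiency payoffs, not formulas from the paper, and γ_l = 0 for these numbers:

```python
# Compare the payoffs of mechanisms 1, 2 (Proposition 1) and 3 (Proposition 2)
# in the resale model with aftermarket probability lam < 1.
# Illustrative assumptions: efficiency objective, F uniform on [0,1],
# v = 1.0, vh = 0.8, vl = 0.0, lam = 0.5; then gamma_l = 0 and, by (7.2),
# alpha_star solves E[theta | theta >= a] = (vh - vl)/(v - vl), i.e. a = 0.6.

v, vh, vl, lam = 1.0, 0.8, 0.0, 0.5
E  = lambda t: t * vh + (1 - t) * vl                  # value absent aftermarket
Vh = lambda t: lam * v + (1 - lam) * E(t)             # assumed efficiency payoffs
Vl = lambda t: lam * ((1 - t) * v + t * vh) + (1 - lam) * E(t)

alpha_star = 2 * (vh - vl) / (v - vl) - 1             # = 0.6 for uniform F

def integral(g, a, b, n=10_000):
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

pay1 = integral(Vl, 0.0, 1.0)                                   # mechanism 1
pay2 = ((1 - lam) * integral(Vl, 0.0, alpha_star)
        + integral(Vh, alpha_star, 1.0))                        # mechanism 2
pay3 = integral(Vh, alpha_star, 1.0)                            # mechanism 3

assert pay2 >= pay3            # mechanism 2 dominates the cutoff mechanism 3
best = max(pay1, pay2)         # Proposition 1: the optimum is one of these two
```

With these numbers the condition in Proposition 1 holds, so mechanism 1 is optimal overall; mechanism 2 nevertheless dominates mechanism 3 by exactly the extra term (1 − λ)∫_0^{α*} V^l f dθ from allocating to low types.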
To understand the difference, suppose first that λ = 1. Both mechanisms allocate
to types above α* and reveal no information. When the aftermarket takes place with
probability one, and the high price is always quoted by the third party, the agent's
endogenous value for acquiring the object is equal to the resale price vh. In particular,
the value no longer depends on the agent's type θ. As a consequence, the designer has to
charge exactly vh for reporting a type above α*, leaving the agent with no information
rents (if the agent received a positive information rent, types below α* would have
an incentive to misreport to also receive a positive rent). It follows that it is not
incentive-compatible to offer the good and recommend the low price to agents below
α*. Otherwise, higher types would have to be offered a strictly positive information
rent, contradicting the above reasoning.
Suppose now that λ < 1. In this case, even when a high resale price is quoted in the
aftermarket, higher types of the agent have a strictly higher value for the object (driven
by the event that the aftermarket does not take place and the agent is the final holder
of the good). Thus, types in [α*, 1] receive a positive information rent, proportional
to the probability of the aftermarket not taking place, 1 − λ. As a consequence, the
designer can sell the good to types below α*, but only with sufficiently small probability,
so that higher types do not want to deviate to reporting a type below α*. In the optimal
mechanism 2, the good is offered exactly with probability 1 − λ (the probability of the
aftermarket not taking place) to types below α*. A low price is recommended for these
types, so it remains optimal for the third party to quote a high price for types above
α*.18
Intuitively, the slope of the agent's utility (as a function of the type) determines
the designer's ability to screen types by offering the good with different probabilities.
When the utility is flat, it is impossible to screen, but as it gets steeper, more
information can be elicited and revealed. The slope is determined by the action of
18 This would not be possible in a cutoff mechanism because the probability of recommending the low price would have to be non-decreasing in the type, by Fact 2. It is not non-decreasing: the low price is never recommended for types above α*.
the third party. When λ = 1, the utility function can sometimes have a zero slope
(when the high price is offered), making screening impossible. When λ < 1, the slope
is bounded from below by (1 − λ)(vh − vl). The optimal mechanism fully exploits this
lower bound to screen the agent's type by offering the good with different probabilities.
Cutoff mechanisms, by design, reveal information about the cutoff, and are hence
incentive-compatible regardless of the slope of the agent's utility in the aftermarket.
However, this also means that cutoff mechanisms do not fully exploit the lower bound
on the slope when λ < 1, and are hence suboptimal.
Calzolari and Pavan (2006) consider a model similar to the one above, with three
differences: (i) they assume that the agent's type is binary (I model it as a continuous
variable), (ii) they consider revenue maximization (I allow an arbitrary objective
function), and (iii) they consider a stochastic binary value of the third party but
assume that the aftermarket always happens (I assume that the third party has a fixed
value which is higher than the value of the agent, but the aftermarket happens with
interior probability). The effect of a stochastic value of the third party is similar to
the effect of an interior probability of the aftermarket: instead of assuming that the
aftermarket does not take place, I could assume that the value of the third party is
below the value of the agent. As a result, the optimal mechanisms from Proposition
1 have a similar structure to the optimal mechanisms from Calzolari and Pavan (2006)
(with some differences stemming from the discrete versus continuous type space).
One interesting insight from Propositions 1 and 2 relative to Calzolari and Pavan
(2006) is that the optimal mechanism reveals information only when there is non-zero
probability that there will be no gains from trade after the mechanism: either because
the aftermarket does not happen (as in my model), or because the third party has a
low value (as in the model of Calzolari and Pavan, 2006).
7.2.1 Robust payoff bounds
Propositions 1 and 2 imply that a cutoff mechanism may sometimes be suboptimal.
They do not directly show how much value the designer may lose by using a mechanism
in this class instead of the optimal mechanism. Under the assumptions of the previous
subsection, I provide robust payoff bounds on the performance of cutoff mechanisms.
Proposition 3a. If the objective is to maximize total expected surplus, then for any
fixed λ ∈ [0, 1] the optimal cutoff mechanism achieves at least a fraction
$$\frac{1}{1+\frac{1}{2}\left(\sqrt{\lambda}-\lambda\right)}$$
of the social surplus of the optimal mechanism.
In particular, regardless of λ, f, vl, vh, and v, a cutoff mechanism is guaranteed to yield
more than 88% of the optimal surplus. Moreover, a cutoff mechanism is optimal at
λ = 0 and at λ = 1 (see Figure 7.2).
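A quick numerical sketch (pure Python; the grid of λ values is illustrative) confirms these numbers for the bound in Proposition 3a:

```python
import math

def surplus_bound(lam):
    # Guaranteed fraction of the optimal social surplus (Proposition 3a)
    return 1.0 / (1.0 + 0.5 * (math.sqrt(lam) - lam))

# Cutoff mechanisms are optimal at both endpoints: the bound equals 1 there.
assert abs(surplus_bound(0.0) - 1.0) < 1e-12
assert abs(surplus_bound(1.0) - 1.0) < 1e-12

# sqrt(lam) - lam is maximized at lam = 1/4, so the guarantee is worst there:
worst = min(surplus_bound(k / 1000) for k in range(1001))
assert abs(worst - 8 / 9) < 1e-9  # 1/(1 + 1/8) = 8/9, i.e. more than 88%
```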
Proposition 3b. If the objective is to maximize expected revenue, and r ≡ vh/vl, then
for any fixed λ ∈ [0, 1] the optimal cutoff mechanism achieves at least a fraction
$$\max\left\{\frac{1}{2-\lambda},\ \frac{1}{1-\lambda(1-r)}\right\}$$
of the expected revenue of the optimal mechanism.
For example, if r = 1.5, then a cutoff mechanism achieves at least 75% of the
optimal expected revenue in all cases. For any finite r, the gap disappears close to
λ = 0 and λ = 1 (see Figure 7.2).
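The same kind of check works for the revenue bound in Proposition 3b (again, the grid is illustrative):

```python
def revenue_bound(lam, r):
    # Guaranteed fraction of the optimal expected revenue (Proposition 3b), r = vh/vl
    return max(1.0 / (2.0 - lam), 1.0 / (1.0 - lam * (1.0 - r)))

# No gap at lam = 0 or lam = 1, for any finite r:
assert revenue_bound(0.0, 1.5) == 1.0
assert revenue_bound(1.0, 1.5) == 1.0

# For r = 1.5 the two branches cross at lam = 2/3, where the guarantee is 75%:
worst = min(revenue_bound(k / 3000, 1.5) for k in range(3001))
assert abs(worst - 0.75) < 1e-3
```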
For a general objective function of the designer, a weaker bound can be provided.
Proposition 3c. For a general objective function of the mechanism designer, a cutoff
mechanism achieves at least a fraction
$$\frac{1}{2-\lambda}$$
of the value of the optimal mechanism.
A cutoff mechanism is always optimal when the probability λ of the aftermarket
is 1. As the probability of the aftermarket vanishes, the optimal cutoff mechanism
achieves at least half of the value of the optimal mechanism. This bound is driven by
cases in which the value from the optimal mechanism goes to zero as λ → 0. If the
payoff of the optimal mechanism is bounded away from zero, then cutoff mechanisms
are trivially optimal in the limit as λ → 0 because when there is no aftermarket,
information disclosure does not influence the outcomes in the mechanism.
Fig. 7.2: Robust payoff guarantees from using a cutoff mechanism, plotted against the probability of the aftermarket λ ∈ [0, 1] (vertical axis from 50% to 100%): social surplus (solid line), revenue for vh/vl ≤ 1.5 (thick dotted line), and arbitrary objective (thin dotted line).
References

Border, Kim C., "Implementation of Reduced Form Auctions: A Geometric Approach," Econometrica, 1991, 59 (4), 1175–1187.

Calzolari, Giacomo and Alessandro Pavan, "Monopoly with resale," RAND Journal of Economics, June 2006, 37 (2), 362–375.

Crawford, Vincent P. and Joel Sobel, "Strategic Information Transmission," Econometrica, November 1982, 50 (6), 1431–1451.

Dworczak, Piotr, "Mechanism Design with Aftermarkets: Cutoff Mechanisms," Working Paper, 2016.

Matthews, Steven A., "On the Implementability of Reduced Form Auctions," Econometrica, 1984, 52 (6), 1519–1522.

Myerson, Roger B., "Optimal Auction Design," Mathematics of Operations Research, 1981, 6 (1), 58–73.

—, "Optimal coordination mechanisms in generalized principal-agent problems," Journal of Mathematical Economics, June 1982, 10 (1), 67–81.

Seierstad, Atle and Knut Sydsaeter, Optimal Control Theory with Economic Applications, Advanced Textbooks in Economics, vol. 24, North-Holland, Elsevier, 1987.
A Proofs
A.1 Proof of Theorem 1
Since x is constant, constraint (M) becomes
q(θ)(uh − ul) is non-decreasing in θ.
Take for concreteness the case when the agent’s utility is submodular, ul > uh (the
other case is fully analogous, and uh = ul is ruled out by assumption). Then, q has to
be non-increasing. By condition (OBh), the mean value theorem for integrals, and the
assumption that v is supermodular,
$$0 \le \int_0^1 (v_h(\theta)-v_l(\theta))\,q(\theta)f(\theta)\,d\theta = q(0^+)\int_0^{\alpha} (v_h(\theta)-v_l(\theta))\,f(\theta)\,d\theta,$$
for some α ∈ [0, 1]. If q(0+) = 0, then q ≡ 0 because q is non-increasing. Otherwise,
we have
$$\int_0^{\alpha} (v_h(\theta)-v_l(\theta))\,f(\theta)\,d\theta \ge 0,$$
which, by the fact that v is supermodular, implies that
$$\int_{\beta}^1 (v_h(\theta)-v_l(\theta))\,f(\theta)\,d\theta \ge 0,$$
for all β ∈ [0, 1]. Then, by condition (OBl), and again by the mean value theorem,
$$0 \ge \int_0^1 (v_h(\theta)-v_l(\theta))(1-q(\theta))\,f(\theta)\,d\theta = (1-q(1^-))\int_{\gamma}^1 (v_h(\theta)-v_l(\theta))\,f(\theta)\,d\theta,$$
for some γ ∈ [0, 1]. Unless $\int_{\gamma}^1 (v_h(\theta)-v_l(\theta))f(\theta)\,d\theta = 0$, this implies that q(1−) = 1,
and because q is non-increasing, we must have q ≡ 1.
If $\int_{\gamma}^1 (v_h(\theta)-v_l(\theta))f(\theta)\,d\theta = 0$, then because $\int_0^{\alpha} (v_h(\theta)-v_l(\theta))f(\theta)\,d\theta \ge 0$ and v is
supermodular, it must be that $\int_0^1 (v_h(\theta)-v_l(\theta))f(\theta)\,d\theta = 0$. Since vh and vl are not
identical (vh ≠ vl), q must be constant.
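The second-mean-value-theorem step used twice above can be illustrated numerically. In the sketch below, g, q, and the uniform density are hypothetical stand-ins for vh − vl, the non-increasing disclosure rule, and f; the claim is that ∫₀¹ g q f dθ = q(0+)∫₀^α g f dθ for some α ∈ [0, 1]:

```python
def g(t):  # stand-in for vh(t) - vl(t): increasing and single-crossing
    return t - 0.4

def q(t):  # a non-increasing step function, as forced by constraint (M)
    return 0.8 if t < 0.3 else (0.5 if t < 0.7 else 0.2)

n = 100_000
h = 1.0 / n
# Left-hand side: integral of g(t) q(t) f(t) over [0, 1], f uniform (midpoint rule)
lhs = sum(g((i + 0.5) * h) * q((i + 0.5) * h) for i in range(n)) * h

def rhs(alpha):  # q(0+) times the integral of g over [0, alpha], in closed form
    return 0.8 * (alpha ** 2 / 2 - 0.4 * alpha)

# Some alpha in [0, 1] matches the left-hand side, as the theorem asserts:
gap = min(abs(rhs(k / 10_000) - lhs) for k in range(10_001))
assert gap < 1e-4
```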
A.2 Proof of Theorem 2
I first assume that the agent's utility is strongly supermodular, that is, uh > 0 and ul = 0.
In this case, the assumptions of Theorem 2 imply that the third party's and the designer's
utility functions are submodular. The optimal design problem takes the form
$$\max_q \int_0^1 \left[q(\theta)V^h(\theta) + (1-q(\theta))V^l(\theta)\right] x(\theta) f(\theta)\,d\theta \tag{A.1}$$
subject to
$$0 \le q(\theta) \le 1, \quad \forall \theta \in \Theta, \tag{A.2}$$
$$q(\theta)x(\theta)\ \text{is non-decreasing in}\ \theta, \tag{A.3}$$
$$\int_0^1 v_h(\theta)q(\theta)x(\theta)f(\theta)\,d\theta \ge \int_0^1 v_l(\theta)q(\theta)x(\theta)f(\theta)\,d\theta, \tag{A.4}$$
$$\int_0^1 v_l(\theta)(1-q(\theta))x(\theta)f(\theta)\,d\theta \ge \int_0^1 v_h(\theta)(1-q(\theta))x(\theta)f(\theta)\,d\theta. \tag{A.5}$$
I consider two cases. First, suppose that
$$\int_0^1 v_h(\theta)x(\theta)f(\theta)\,d\theta \ge \int_0^1 v_l(\theta)x(\theta)f(\theta)\,d\theta, \tag{A.6}$$
i.e. the third party takes the high action in the absence of any additional information.
Then, revealing no information (q ≡ 0) is a feasible mechanism, and it is optimal
because, by assumption, V^h ≥ V^l. In this case, the optimal mechanism does not
reveal any information, and is thus trivially both a partitional and a cutoff mechanism
(q(θ)x(θ) ≡ 0 and (1 − q(θ))x(θ) = x(θ) are non-decreasing).
Consider the opposite case, in which condition (A.6) fails. Let y(θ) ≡ q(θ)x(θ). I
consider a relaxed problem without the constraint (A.5) (I will verify ex post that this
constraint is satisfied at the solution of the relaxed problem). Deleting terms that
do not affect the value of the objective function, we obtain
$$\max_y \int_0^1 \left[V^h(\theta)-V^l(\theta)\right]y(\theta)f(\theta)\,d\theta$$
subject to
$$0 \le y(\theta) \le x(\theta), \quad \forall\theta \in \Theta,$$
$$y(\theta)\ \text{is non-decreasing in}\ \theta,$$
$$\int_0^1 \left[v_h(\theta)-v_l(\theta)\right]y(\theta)f(\theta)\,d\theta \ge 0.$$
Under the assumptions of Theorem 2, define φ(θ) = V^h(θ) − V^l(θ) and ψ(θ) = v_h(θ) − v_l(θ),
and apply Lemma 1 from Appendix A.13. Because V and v are submodular,
φ and ψ are non-increasing. Thus, the optimal mechanism takes the form y(θ) =
min{x(θ), x̄} for some x̄ ∈ [0, 1]. Because both q(θ)x(θ) = min{x(θ), x̄} and (1 − q(θ))x(θ) = x(θ) −
y(θ) = max{x(θ) − x̄, 0} are non-decreasing, this corresponds to a cutoff mechanism.
The problem becomes
$$\max_{\bar{x}} \int_0^1 \left[V^h(\theta)-V^l(\theta)\right]\min\{x(\theta),\bar{x}\}f(\theta)\,d\theta$$
subject to
$$\int_0^1 \left[v_h(\theta)-v_l(\theta)\right]\min\{x(\theta),\bar{x}\}f(\theta)\,d\theta \ge 0.$$
We know that $\bar{x} = \max_\theta x(\theta)$ is not feasible, and that the objective function is non-decreasing
in x̄. Thus, the constraint binds at the optimal solution:
$$\int_0^1 \left[v_h(\theta)-v_l(\theta)\right]\min\{x(\theta),\bar{x}\}f(\theta)\,d\theta = 0.$$
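For intuition, the binding threshold x̄ can be computed in a toy example. All primitives below are hypothetical (they are not from the paper): θ is uniform on [0, 1], ψ(θ) = vh(θ) − vl(θ) = 0.6 − θ is non-increasing (consistent with submodular v), and x(θ) = θ:

```python
import math

def psi(t):  # vh(t) - vl(t), non-increasing (v submodular); hypothetical numbers
    return 0.6 - t

def x(t):    # a non-decreasing allocation rule; hypothetical
    return t

def constraint(xbar, n=10_000):
    # Integral of psi(t) * min{x(t), xbar} over [0, 1], f uniform (midpoint rule)
    h = 1.0 / n
    return sum(psi((i + 0.5) * h) * min(x((i + 0.5) * h), xbar)
               for i in range(n)) * h

assert constraint(1.0) < 0  # the analogue of (A.6) fails, so the constraint binds
lo, hi = 0.01, 1.0          # constraint > 0 at lo and < 0 at hi: bisect for the root
for _ in range(45):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if constraint(mid) > 0 else (lo, mid)
xbar = 0.5 * (lo + hi)
assert abs(xbar - (1.8 - math.sqrt(0.84)) / 2) < 1e-3  # closed form, approx 0.4417
```

For these toy primitives the binding constraint reduces to a quadratic, which is why a closed-form root is available for the comparison.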
Finally, I have to verify that the solution to the relaxed problem is feasible for the
original problem, i.e. that
$$\int_0^1 v_l(\theta)(1-q(\theta))x(\theta)f(\theta)\,d\theta \ge \int_0^1 v_h(\theta)(1-q(\theta))x(\theta)f(\theta)\,d\theta.$$
We know that
$$\int_0^1 \left[v_l(\theta)-v_h(\theta)\right]x(\theta)f(\theta)\,d\theta > 0.$$
From the above,
$$0 < \int_0^1 [v_l(\theta)-v_h(\theta)]x(\theta)f(\theta)\,d\theta = \underbrace{\int_0^1 [v_l(\theta)-v_h(\theta)]y(\theta)f(\theta)\,d\theta}_{=\,0} + \int_0^1 [v_l(\theta)-v_h(\theta)](x(\theta)-y(\theta))f(\theta)\,d\theta$$
$$= \int_0^1 [v_l(\theta)-v_h(\theta)](1-q(\theta))x(\theta)f(\theta)\,d\theta, \tag{A.7}$$
so constraint (A.5) holds. Thus, a cutoff mechanism is optimal.
I now assume that the agent's utility is strongly submodular: uh = 0 and ul > 0. In
this case, the assumptions of Theorem 2 imply that the third party’s and designer’s
utility functions are supermodular. I only consider the case when
$$\int_0^1 v_h(\theta)x(\theta)f(\theta)\,d\theta < \int_0^1 v_l(\theta)x(\theta)f(\theta)\,d\theta. \tag{A.8}$$
(The other case can be handled exactly as in the case of supermodular agent’s utility.)
The design problem takes the form (A.1) - (A.5) but with (A.3) replaced by
(1− q(θ))x(θ) is non-decreasing in θ.
Let y(θ) ≡ (1 − q(θ))x(θ). I consider a relaxed problem without the constraint (A.5).
By deleting terms that do not affect the value of the objective function we obtain
$$\max_y \int_0^1 \left[V^l(\theta)-V^h(\theta)\right]y(\theta)f(\theta)\,d\theta$$
subject to
$$0 \le y(\theta) \le x(\theta), \quad \forall\theta \in \Theta,$$
$$y(\theta)\ \text{is non-decreasing in}\ \theta,$$
$$\int_0^1 \left[v_l(\theta)-v_h(\theta)\right]y(\theta)f(\theta)\,d\theta \ge \int_0^1 \left[v_l(\theta)-v_h(\theta)\right]x(\theta)f(\theta)\,d\theta.$$
Define φ(θ) = −(V^h(θ) − V^l(θ)) and ψ(θ) = −(v_h(θ) − v_l(θ)). When V and v are supermodular,
φ and ψ are non-increasing. We can thus apply Lemma 1 from Appendix
A.13. The rest of the proof is identical to the proof in the previous case.
A.3 Proof of Theorem 3
The strategy for the proof is to show that the optimal allocation rule x has to be
non-decreasing. Then, the conclusion of Theorem 3 will follow from Theorem 2.
First, I consider the case when the agent’s utility is strongly supermodular. By
inspection of the proof of Theorem 2, in the optimal solution, y(θ) ≡ q(θ)x(θ) has to
be non-decreasing. I consider an auxiliary problem in which y is fixed, and I optimize
over x – at the joint optimal solution, x must be optimal for a fixed y. The problem is
$$\max_x \int_0^1 V^l(\theta)x(\theta)f(\theta)\,d\theta,$$
subject to
$$y(\theta) \le x(\theta) \le 1, \quad \forall\theta.$$
By assumption, V^l(θ) is non-decreasing. Thus, the optimal x pushes all mass to the
right, that is, x*(θ) = y(θ) for all θ < θ* and x*(θ) = 1 for all θ ≥ θ* (formally, I
apply the same argument that was used in the proof of Theorem 2). In particular, the
optimal x is non-decreasing. The conclusion now follows by applying Theorem 2.
Now I consider the case when the agent’s utility is strongly submodular. By inspec-
tion of the proof of Theorem 2, in the optimal solution, y(θ) ≡ (1− q(θ))x(θ) has to be
non-decreasing. In this case, the auxiliary problem for a fixed y becomes
$$\max_x \int_0^1 V^h(\theta)x(\theta)f(\theta)\,d\theta,$$
subject to
$$y(\theta) \le x(\theta) \le 1, \quad \forall\theta,$$
$$\int_0^1 (v_h(\theta)-v_l(\theta))x(\theta)f(\theta)\,d\theta \ge \int_0^1 (v_h(\theta)-v_l(\theta))y(\theta)f(\theta)\,d\theta.$$
Under the assumption of strong counter-modularity, because the agent's utility is submodular,
v_h(θ) − v_l(θ) is non-decreasing. V^h(θ) is also non-decreasing by assumption.
Thus, by the usual argument (applying Lemma 1), the optimal x pushes all mass to the
right, that is, x*(θ) = y(θ) for all θ < θ* and x*(θ) = 1 for all θ ≥ θ*. In particular, the
optimal x is non-decreasing. The conclusion now follows by applying Theorem 2.
A.4 Proof of Theorem 4
The proof is very similar to the proof of Theorem 3, so I omit some details. The only
difference is the presence of the additional constraint – the Matthews-Border condition
(MB). As before, I will show that the optimal allocation rule x has to be non-decreasing.
Then, the conclusion of Theorem 4 will follow from Theorem 2.
I will only consider the case when the agent's utility is strongly supermodular (the
other case is analogous). At the optimal solution, y(θ) ≡ q(θ)x(θ) has to be non-decreasing.
Consider the problem of optimizing over x for a fixed non-decreasing y
(such that there exists at least one feasible x; otherwise such y cannot be part of the
optimal solution):
$$\max_x \int_0^1 V^l(\theta)x(\theta)f(\theta)\,d\theta,$$
subject to
$$y(\theta) \le x(\theta) \le 1, \quad \forall\theta, \tag{A.9}$$
$$\int_{\tau}^1 x(\theta)f(\theta)\,d\theta \le \frac{1-F^N(\tau)}{N}, \quad \forall\tau \in [0,1]. \tag{A.10}$$
Notice that y must satisfy (A.10). Otherwise, because x must be point-wise higher than
y, no x would be feasible for the above problem. Conditions (A.9) and (A.10) jointly
imply that for any β ≤ τ ,
$$\int_{\beta}^{\tau} y(\theta)f(\theta)\,d\theta + \int_{\tau}^1 x(\theta)f(\theta)\,d\theta \le \frac{1-F^N(\beta)}{N}.$$
Therefore, condition (A.10) can be sharpened to
$$\int_{\tau}^1 x(\theta)f(\theta)\,d\theta \le \Gamma(\tau) \equiv \min_{\beta \le \tau}\underbrace{\left[\frac{1-F^N(\beta)}{N} - \int_{\beta}^1 y(\theta)f(\theta)\,d\theta\right]}_{g(\beta)} + \int_{\tau}^1 y(\theta)f(\theta)\,d\theta.$$
I denote by x̄ the function obtained by solving
$$\int_{\tau}^1 \bar{x}(\theta)f(\theta)\,d\theta = \Gamma(\tau) \tag{A.11}$$
point-wise for all τ (x̄ is determined uniquely up to a zero-measure set of points). By
assumption, V^l(θ) is non-decreasing. Thus, the optimal x* pushes all mass to the right
(formally, I apply the same argument that was used in the proof of Theorem 2). This
implies that there exists some threshold θ* such that x*(θ) = y(θ) for θ < θ* and
x*(θ) = x̄(θ) for θ ≥ θ*. Thus, to prove that x*(θ) is non-decreasing, I have to show
that x̄(θ) is non-decreasing and that it lies above y(θ).
Denote by β*(τ) = max{argmin_{β≤τ} g(β)} the (largest) solution to the inner
minimization problem in Γ(τ). We have either β*(τ) ∈ {0, τ} (a boundary solution) or
F^{N−1}(β*(τ)) = y(β*(τ)) (an interior solution satisfying the first-order condition). As
τ grows, larger values of β become feasible but the objective function does not depend
on τ, so β*(τ) is non-decreasing. When β*(τ) < τ in some neighborhood, β*(τ)
does not depend on τ, and differentiating (A.11) yields x̄(θ) = y(θ). Similarly, when
β*(τ) = τ in some neighborhood, we conclude that x̄(θ) = F^{N−1}(θ). In the case of a
boundary solution, the (left) derivative of g(β) at β = τ must be non-positive (otherwise
the optimal β would be smaller than τ), that is, F^{N−1}(τ) ≥ y(τ). This shows
that x̄(θ) ≥ y(θ) for all θ. Finally, to show that x̄(θ) is non-decreasing, I have to show
that it is non-decreasing whenever it switches between y(θ) and F^{N−1}(θ) (I have to rule
out the possibility of a downward jump). When x̄(θ) switches from y(θ) to F^{N−1}(θ),
this follows from the argument shown above. Suppose that x̄(θ) switches from F^{N−1}(θ)
to y(θ) at θ. Then, we must have β*(τ) = θ for all τ in some right neighborhood of
θ (because β*(θ) = θ, β*(τ) is non-decreasing, and β*(τ) does not depend on τ when the
solution is interior). It follows that the first-order condition holds at β*(θ) = θ, so that
F^{N−1}(θ) = y(θ). Thus, x̄(θ) is in fact continuous at θ.
This proves that the optimal x*(θ) is non-decreasing. The conclusion now follows
by applying Theorem 2.
A.5 Proof of Claim 1
Theorem 2 implies that a cutoff mechanism is optimal. When condition (6.2) holds, it is
trivially optimal to reveal no additional information, as this will lead to a high action
in the aftermarket. When condition (6.2) fails, by inspection of the proof of Theorem
2, the optimal solution is given by y(θ) = min{x(θ), x̄}, where y(θ) = (1 − q(θ))x(θ),
and x̄ is defined by
$$\int_0^1 \left[v_h(\theta)-v_l(\theta)\right]\min\{x(\theta),\bar{x}\}f(\theta)\,d\theta = \int_0^1 \left[v_h(\theta)-v_l(\theta)\right]x(\theta)f(\theta)\,d\theta.$$
It can be seen that x̄ defined above coincides with x_res defined by (6.3).
Since y(θ) = (1 − q(θ))x(θ) = min{x(θ), x̄}, we obtain q(θ) = max{1 − x_res/x(θ), 0}.
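A small sanity check of the resulting disclosure rule (the numeric values of x(θ) and x_res are hypothetical):

```python
def disclosure(x_theta, x_res):
    # q(theta) = max{1 - x_res / x(theta), 0}, defined where x(theta) > 0
    return max(1.0 - x_res / x_theta, 0.0)

assert disclosure(0.8, 0.4) == 0.5  # high-allocation types: interior disclosure
assert disclosure(0.4, 0.4) == 0.0  # at x(theta) = x_res the rule reaches zero
assert disclosure(0.2, 0.4) == 0.0  # below x_res, no information is disclosed
```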
A.6 Proof of Claim 2
We know that a cutoff mechanism is optimal (by Theorem 3). Since the optimal cutoff
mechanism reveals no information (see Corollary 2), there are only two cases to consider:
either (i) y(θ) = x(θ) or (ii) y(θ) = 0, where y(θ) = (1− q(θ))x(θ).
In case (i), the optimization problem is
$$\max_x \int_0^1 V^l(\theta)x(\theta)\,d\theta$$
subject to
$$x(\theta)\ \text{is non-decreasing in}\ \theta.$$
The solution is thus trivially x(θ) = 1 for all θ (because V^l(θ) = (1 − θ)v + θv_h ≥ 0).
In case (ii), the optimization problem is
$$v\,\max_x \int_0^1 x(\theta)\,d\theta$$
subject to
$$x(\theta)\ \text{is non-decreasing in}\ \theta,$$
$$\int_0^1 \left[\theta(v-v_l)-(v_h-v_l)\right]x(\theta)f(\theta)\,d\theta \ge 0. \tag{A.12}$$
Thus, the optimal solution takes the form $x(\theta) = \mathbf{1}\{\theta \ge \underline{\theta}\}$ for the smallest $\underline{\theta}$ such that x
satisfies (A.12):
$$\int_{\underline{\theta}}^1 \left[\theta(v-v_l)-(v_h-v_l)\right]f(\theta)\,d\theta = 0.$$
Solving for $\underline{\theta}$ yields
$$\mathbb{E}_f\left[\theta \mid \theta \ge \underline{\theta}\right] = \frac{v_h-v_l}{v-v_l}.$$
The mechanism from case (i) yields (1 − E_f[θ])v + E_f[θ]v_h, while the mechanism from
case (ii) yields $v(1-F(\underline{\theta}))$. Therefore, mechanism (a) from Claim 2 dominates whenever
$$\mathbb{E}_f[\theta] \le \frac{v}{v-v_h}\,F(\underline{\theta}).$$
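A toy computation illustrates the comparison; the numbers v = 1, vl = 0.2, vh = 0.76 and the uniform distribution are hypothetical:

```python
v, vl, vh = 1.0, 0.2, 0.76               # hypothetical values with vl < vh < v
delta = (vh - vl) / (v - vl)             # = 0.7
t = 2 * delta - 1                        # uniform F: E[theta | theta >= t] = (1 + t)/2
assert abs((1 + t) / 2 - delta) < 1e-12  # t solves the threshold condition

mean_theta = 0.5                         # E[theta] under the uniform distribution
value_i = (1 - mean_theta) * v + mean_theta * vh  # case (i): allocate to everyone
value_ii = v * (1 - t)                            # case (ii): exclude types below t

# Claim 2's comparison: mechanism (a) dominates iff E[theta] <= v F(t) / (v - vh)
assert (mean_theta <= v * t / (v - vh)) == (value_i >= value_ii)
```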
A.7 Proof of Claim 3
By Theorem 4, a cutoff mechanism is optimal. I let y(θ) = x(θ)(1 − q(θ)) denote the
probability that the good is allocated and a low price is recommended. In a cutoff
mechanism, by Fact 2, both x(θ) and y(θ) are non-decreasing in θ. I let φ(θ) ≡ (v − v_h) − (v − v_l)(1 − θ).
I consider a relaxed problem (omitting constraint (OBl)), and
then verify that the solution is feasible. The relaxed problem of maximizing surplus
takes the form:
$$\max_{x,\,y}\ \left\{ v\int_0^1 x(\theta)f(\theta)\,d\theta - (v-v_h)\int_0^1 \theta\,y(\theta)f(\theta)\,d\theta \right\} \tag{A.13}$$
subject to
$$0 \le y(\theta) \le x(\theta) \le 1, \quad \forall\theta, \tag{A.14}$$
$$x,\ y\ \text{are non-decreasing}, \tag{A.15}$$
$$\int_0^1 x(\theta)\phi(\theta)f(\theta)\,d\theta \ge \int_0^1 y(\theta)\phi(\theta)f(\theta)\,d\theta, \tag{A.16}$$
$$\int_{\tau}^1 x(\theta)f(\theta)\,d\theta \le \frac{1}{N}\left(1-F^N(\tau)\right), \quad \forall\tau \in [0,1]. \tag{A.17}$$
The objective function (A.13) is equal to per-agent total expected surplus.
I will solve the problem (A.13)-(A.17) in two steps. In the first step, I optimize over
y treating x as given. In the second, I optimize over x.
Step 1. Optimization over y for fixed x. For a fixed non-decreasing function x, the
first-step problem can be expressed as (terms in the objective not depending on y can
be omitted):
$$\min_y \int_0^1 \theta\,y(\theta)f(\theta)\,d\theta \tag{A.18}$$
subject to
$$0 \le y(\theta) \le x(\theta), \quad \forall\theta, \tag{A.19}$$
$$y\ \text{is non-decreasing}, \tag{A.20}$$
$$\int_0^1 x(\theta)\phi(\theta)f(\theta)\,d\theta \ge \int_0^1 y(\theta)\phi(\theta)f(\theta)\,d\theta. \tag{A.21}$$
This problem has been solved in the proof of Theorem 2. The optimal solution takes
the form
$$y^*(\theta) = \begin{cases} x(\theta) & \theta < \theta^* \\ \bar{x} & \theta \ge \theta^*, \end{cases} \tag{A.22}$$
for some θ* ∈ [0, 1] and x̄ ∈ [x₋(θ*), x₊(θ*)], where x₋(θ*) and x₊(θ*) denote the left
and the right limit of x at θ*, respectively. If x is continuous at θ*, then x̄ = x(θ*). Indeed,
y* crosses any other y satisfying (A.19), (A.20), and $\int_0^1 y(\theta)f(\theta)\,d\theta = \int_0^1 y^*(\theta)f(\theta)\,d\theta$
once and from above, so it is first-order stochastically dominated (in the sense defined
above) by any such y.
There are two cases, depending on the properties of the fixed function x. In case (1),
x satisfies $\int_0^1 x(\theta)\phi(\theta)f(\theta)\,d\theta \ge 0$. Then y*(θ) = 0 for all θ (that is, x̄ = 0, θ* = 0) achieves
the global minimum. In case (2), $\int_0^1 x(\theta)\phi(\theta)f(\theta)\,d\theta < 0$, and x̄ ∈ [x₋(θ*), x₊(θ*)] is
pinned down by the binding constraint (A.21):
$$\int_{\theta^*}^1 \left(x(\theta)-\bar{x}\right)\phi(\theta)f(\theta)\,d\theta = 0.$$
If there are multiple (x̄, θ*) satisfying these restrictions, then it must be that x(θ) = x̄
in some (possibly one-sided) neighborhood of θ*, so y* is defined uniquely.
Step 2. Optimization over x. Having solved for the optimal y given x, in step 2, I
optimize over x. I proceed by finding the optimal x separately for cases (1) and (2)
defined above. At the end, I compare the two constrained optima to find the globally
optimal mechanism.
Case 1: $\int_0^1 x(\theta)\phi(\theta)f(\theta)\,d\theta \ge 0$.
Because in this case a high price is always quoted in the second stage, the problem
(A.13)-(A.17) becomes
$$\max_x \int_0^1 x(\theta)f(\theta)\,d\theta \tag{A.23}$$
subject to
$$0 \le x(\theta) \le 1, \quad \forall\theta, \tag{A.24}$$
$$x\ \text{is non-decreasing}, \tag{A.25}$$
$$\int_0^1 x(\theta)\phi(\theta)f(\theta)\,d\theta \ge 0, \tag{A.26}$$
$$\int_{\tau}^1 x(\theta)f(\theta)\,d\theta \le \frac{1}{N}\left(1-F^N(\tau)\right), \quad \forall\tau \in [0,1]. \tag{A.27}$$
By a similar argument as before, an optimal x should first-order stochastically dominate
any x′ satisfying conditions (A.24), (A.25), and (A.27). Informally, optimality requires
that x "shifts mass as much as possible to the right," subject to the constraints. Thus, an
optimal x satisfies (A.27) with equality for all τ ≥ β, and is zero on [0, β], where β ≥ 0
is the smallest number such that constraint (A.26) holds. Either
$$\int_0^1 x(\theta)\phi(\theta)f(\theta)\,d\theta \ge 0, \tag{A.28}$$
in which case β = 0, or β > 0 is defined by
$$\int_{\beta}^1 x(\theta)\phi(\theta)f(\theta)\,d\theta = 0. \tag{A.29}$$
Since x satisfies the Matthews-Border condition (A.27) with equality on [β, 1], it is induced
by a joint rule that gives the good to the agent with the highest type, conditional
on at least one agent having a type above β. That is,
$$x(\theta) = \begin{cases} 0 & \theta < \beta \\ F^{N-1}(\theta) & \theta \ge \beta. \end{cases}$$
If condition (A.28) holds, β = 0. With the above x, (A.28) is equivalent to $\mathbb{E}_f[\theta^{(1)}_N] \ge (v_h-v_l)/(v-v_l)$.
Thus, under this condition, the optimal mechanism for case (1) is
an efficient auction with no information revelation.
If condition (A.28) does not hold, then β > 0, and the mechanism is an auction
with a positive reserve price and no information revelation.
Case 2: $\int_0^1 x(\theta)\phi(\theta)f(\theta)\,d\theta < 0$.
In case (2), problem (A.13)-(A.17), in a relaxed version, becomes
$$\max_{x,\,\theta^*,\,\bar{x}}\ \left\{\int_0^{\theta^*}\left[v-(v-v_h)\theta\right]x(\theta)f(\theta)\,d\theta + v\int_{\theta^*}^1 x(\theta)f(\theta)\,d\theta - \bar{x}(v-v_h)\int_{\theta^*}^1 \theta f(\theta)\,d\theta\right\} \tag{A.30}$$
subject to
$$0 \le x(\theta) \le \bar{x}, \quad \forall\theta \le \theta^*, \tag{A.31}$$
$$\bar{x} \le x(\theta) \le 1, \quad \forall\theta \ge \theta^*, \tag{A.32}$$
$$x\ \text{is non-decreasing}, \tag{A.33}$$
$$\int_{\theta^*}^1 \left(x(\theta)-\bar{x}\right)\phi(\theta)f(\theta)\,d\theta \ge 0, \tag{A.34}$$
$$\int_{\tau}^1 x(\theta)f(\theta)\,d\theta \le \frac{1}{N}\left(1-F^N(\tau)\right), \quad \forall\tau \in [0,1]. \tag{A.35}$$
The problem is relaxed because condition (A.34) should in fact be an equality. For
any fixed x̄ and θ*, x should be maximized point-wise on [θ*, 1], which means that the
Matthews-Border condition (A.35) will bind everywhere on [θ*, 1] (point-wise maximization
on this interval does not interact with any other constraint). Thus, x(θ) =
F^{N−1}(θ) for θ ∈ [θ*, 1].
Now, consider x on [0, θ*]. In the objective function, x multiplies the function
v − (v − v_h)θ, which is positive and decreasing. Because x cannot be decreasing (due to
constraint (A.33)), the optimal x must be constant on [0, θ*], equal to some γ ≤ x̄ such
that condition (A.35) is satisfied. Overall, the problem boils down to
$$\max_{\gamma \le \bar{x},\ \theta^*}\ \left\{\gamma\int_0^{\theta^*}\left[v-(v-v_h)\theta\right]f(\theta)\,d\theta - \bar{x}(v-v_h)\int_{\theta^*}^1 \theta f(\theta)\,d\theta\right\} \tag{A.36}$$
subject to
$$\int_{\theta^*}^1 \left(F^{N-1}(\theta)-\bar{x}\right)\phi(\theta)f(\theta)\,d\theta \ge 0, \tag{A.37}$$
$$\gamma\left(F(\theta^*)-t\right) \le \frac{1}{N}\left(F^N(\theta^*)-t^N\right), \quad \forall t \in [0, F(\theta^*)]. \tag{A.38}$$
Constraint (A.38) can only bind at the endpoints of the interval because the function
on the left-hand side is affine in t, and the function on the right-hand side is concave in t. Thus,
(A.38) becomes γ ≤ (1/N)F^{N−1}(θ*). Because the objective function is increasing in γ,
it is optimal to set γ to its upper bound: γ = min(x̄, (1/N)F^{N−1}(θ*)). The objective
is also strictly increasing in θ*. This means that constraint (A.37) must bind. Suppose
that x̄ > (1/N)F^{N−1}(θ*). Then, by decreasing x̄ slightly, we increase the objective
function and preserve constraint (A.37). Thus, γ = x̄ = (1/N)F^{N−1}(θ*) at the optimal
solution. Because the objective function is increasing in θ*, the solution is obtained by
finding the highest θ* for which equation (A.37) binds, that is,
$$\int_{\theta^*}^1 \left(F^{N-1}(\theta)-\frac{1}{N}F^{N-1}(\theta^*)\right)\phi(\theta)f(\theta)\,d\theta = 0. \tag{A.39}$$
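For a concrete feel for equation (A.39), the sketch below solves it numerically; the values v = 1, vl = 0.2, vh = 0.9, uniform F, and N = 2 are hypothetical:

```python
import math

v, vl, vh, N = 1.0, 0.2, 0.9, 2                     # hypothetical parameters
def phi(t): return (v - vh) - (v - vl) * (1 - t)    # = 0.8*t - 0.7 here
def F(t): return t                                   # uniform on [0, 1]

def eq_A39(ts, n=5_000):
    # Integral over [ts, 1] of (F^{N-1}(t) - (1/N) F^{N-1}(ts)) * phi(t) * f(t)
    h = (1.0 - ts) / n
    return sum((F(ts + (i + 0.5) * h) ** (N - 1) - F(ts) ** (N - 1) / N)
               * phi(ts + (i + 0.5) * h) for i in range(n)) * h

# Negative at 0 (we are in case 2) and positive near 1, so bisect for theta*:
lo, hi = 0.0, 0.99
assert eq_A39(lo) < 0 < eq_A39(hi)
for _ in range(45):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if eq_A39(mid) < 0 else (lo, mid)

# For these numbers the root has the closed form (sqrt(6) - 1)/2, approx 0.7247:
assert abs(0.5 * (lo + hi) - (math.sqrt(6) - 1) / 2) < 1e-3
```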
Because we are in case (2), by assumption, (A.37) is violated at θ* = 0. Thus, θ* is
strictly positive, and hence x̄ is also strictly positive.
Summarizing, the solution takes the form
$$x(\theta) = \begin{cases} \frac{1}{N}F^{N-1}(\theta^*) & \theta < \theta^* \\ F^{N-1}(\theta) & \theta \ge \theta^*, \end{cases}$$
and y(θ) = (1/N)F^{N−1}(θ*) for all θ. The functions x and y are easily seen to be
feasible for the original (unrelaxed) problem. From this, one can directly derive the
form of the optimal disclosure rule q from the statement of Claim 3 (point a).
The allocation rule x is implemented by giving the good to the highest type if the
highest type is above θ*, and allocating the object uniformly at random otherwise.
Suppose that the only information revealed by the mechanism is whether the
second-highest type was below θ* (low signal) or above θ* (high signal). Then, from
the point of view of an agent with type θ, the probability of winning the object and the low
signal being sent is equal to (1/N)F^{N−1}(θ*). Thus, x and y are implemented through
this procedure. Moreover, this procedure corresponds to the indirect implementation
described in the discussion below Claim 3. The price p* can be set in such a way
that type θ* is exactly indifferent between accepting and rejecting. Because continuation
payoffs are monotone in the type, this means that exactly the types above θ* accept the
offer p*. Then, whether the tie-breaking auction takes place or not depends on whether
the second-highest type is below or above θ*.
Comparing case (1) and case (2).
Assumption (6.1) in the current setting means that $\mathbb{E}_f[\theta^{(1)}_N] < (v_h - v_l)/(v - v_l)$.
When $\mathbb{E}_f[\theta^{(1)}_N] < (v_h - v_l)/(v - v_l)$, β > 0 in case (1), so the optimal mechanism
in case (1) can be implemented as an auction with a reserve price and no information
revelation. That mechanism corresponds to the mechanism described in point (b) of
Claim 3 (with r* = β). I have shown that the optimal mechanism is either the one from
case (1) (corresponding to point (b) in Claim 3) or the one from case (2) (corresponding
to point (a) in Claim 3). What remains to be shown is that the mechanism from case
(2) is optimal under a regularity condition, to be defined below.
Given the optimal mechanism for case (1), I will construct an alternative mechanism
that is feasible and yields a strictly higher value of objective (A.13) under a regularity
condition. This will mean that the mechanism from case (2) must be optimal.
Fix the optimal mechanism in case (1) with β > 0. Consider an alternative mechanism,
indexed by ε ≥ 0, with y_ε(θ) = ε for all θ, and
$$x_{\varepsilon}(\theta) = \begin{cases} \varepsilon & \theta < \beta_{\varepsilon} \\ F^{N-1}(\theta) & \theta \ge \beta_{\varepsilon}, \end{cases}$$
where β_ε is defined by
$$\int_{\beta_{\varepsilon}}^1 \left(F^{N-1}(\theta)-\varepsilon\right)\phi(\theta)f(\theta)\,d\theta = 0. \tag{A.40}$$
At ε = 0, we have β_0 = β > 0 (because β is defined by equation (A.29)), so for small ε there
exists a strictly positive solution β_ε to equation (A.40). Intuitively, I have constructed a
mechanism that takes a small step ε towards the optimal mechanism from case (2). For
small enough ε, constraint (A.17) holds, and constraint (A.16) is satisfied with equality
given that equation (A.40) holds. Thus, the pair (x_ε, y_ε) is feasible for small enough ε.
For ε = 0, (x_0, y_0) is the optimal solution for case (1). Therefore, it is enough to
show that the objective function (A.13) is strictly increasing in ε in a neighborhood
of ε = 0. Because the objective function is differentiable in ε (in particular, β_ε is
differentiable in ε by the implicit function theorem), it is enough to show that the
derivative is strictly positive at 0. Using the implicit function theorem to differentiate
β_ε via equation (A.40), the right derivative of (A.13) under the mechanism (x_ε, y_ε)
at ε = 0 can be shown to be
$$vF(\beta) + v\,\frac{\int_{\beta}^1 \phi(\theta)f(\theta)\,d\theta}{\phi(\beta)} - (v-v_h)\int_0^1 \theta f(\theta)\,d\theta. \tag{A.41}$$
Equation (A.29) defining β can be written as
$$\frac{v-v_h}{v-v_l} = 1 - \mathbb{E}_f\left[\theta^{(1)}_N \,\middle|\, \theta^{(1)}_N \ge \beta\right] \equiv 1 - \theta^{(1)}_{\beta}.$$
Given that v_l > 0, we have $(v - v_h)/v < 1 - \theta^{(1)}_{\beta}$. Moreover, recalling that φ(θ) ≡
(v − v_h) − (v − v_l)(1 − θ), we have $\phi(\theta) = (v-v_l)\left[\theta - \theta^{(1)}_{\beta}\right]$. Using these relations, to
show that (A.41) is strictly positive, it is enough to show that
$$F(\beta) \ge \frac{\int_{\beta}^1 \left(\theta - \theta^{(1)}_{\beta}\right)f(\theta)\,d\theta}{\theta^{(1)}_{\beta} - \beta} + \left(1-\theta^{(1)}_{\beta}\right)\int_0^1 \theta f(\theta)\,d\theta.$$
Rearranging terms, we get
$$\theta^{(1)}_{\beta} - \left(\theta^{(1)}_{\beta}-\beta\right)\left(1-\theta^{(1)}_{\beta}\right)\int_0^1 \theta f(\theta)\,d\theta - \beta F(\beta) \ge \int_{\beta}^1 \theta f(\theta)\,d\theta.$$
Using integration by parts, and rearranging again,
$$\left(1-\theta^{(1)}_{\beta}\right)\left[1 + \left(\theta^{(1)}_{\beta}-\beta\right)\int_0^1 \theta f(\theta)\,d\theta\right] \le \int_{\beta}^1 F(\theta)\,d\theta. \tag{A.42}$$
If inequality (A.42) holds for all β ∈ [0, 1], I will say that the distribution F satisfies
the regularity condition. Under the regularity condition, I have shown that the mechanism
from case (1) cannot be optimal; therefore, the mechanism from case (2) must be
optimal.
In the remainder of the proof, I show that F(θ) = θ^κ satisfies the regularity condition
for any κ > 0. I will show that a more restrictive inequality holds:
$$\int_{\beta}^1 F(\theta)\,d\theta - \left(1-\theta^{(1)}_{\beta,2}\right)\left[1 + (1-\beta)\int_0^1 \theta f(\theta)\,d\theta\right] \ge 0,$$
where $\theta^{(1)}_{\beta,2}$ denotes the expectation of the first order statistic conditional on exceeding β
when N = 2 (the smaller is N, the harder it is to satisfy (A.42)). By brute-force calculation,
one can check that the left-hand side of the above inequality is a concave function of
β. Thus, it is enough to check that the inequality holds at the two endpoints. When
β = 0, we have
$$\int_0^1 F(\theta)\,d\theta - \left(1-\theta^{(1)}_{0,2}\right)\left[1 + \int_0^1 \theta f(\theta)\,d\theta\right] = \frac{1}{1+\kappa} - \left(1 - \frac{2\kappa}{2\kappa+1}\right)\left(1 + \frac{\kappa}{\kappa+1}\right) = 0.$$
On the other hand, for β = 1, we have $\theta^{(1)}_{\beta,2} = 1$, and the inequality is trivially satisfied.
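The endpoint argument can be spot-checked numerically; the grid and the κ values below are illustrative (N = 2, so the first order statistic of two draws from F(θ) = θ^κ has CDF θ^{2κ}):

```python
def slack(beta, kappa):
    # LHS minus RHS of the restrictive inequality, for F(theta) = theta**kappa, N = 2
    int_F = (1 - beta ** (kappa + 1)) / (kappa + 1)   # integral of F over [beta, 1]
    mean = kappa / (kappa + 1)                        # E[theta] under f
    cond_max = ((2 * kappa / (2 * kappa + 1))
                * (1 - beta ** (2 * kappa + 1)) / (1 - beta ** (2 * kappa)))
    return int_F - (1 - cond_max) * (1 + (1 - beta) * mean)

for kappa in (0.5, 1.0, 2.0, 5.0):
    # Non-negative on the interior, and equal to zero at beta = 0 (up to rounding):
    assert all(slack(b / 100, kappa) >= -1e-9 for b in range(1, 100))
    assert abs(slack(1e-9, kappa)) < 1e-6
```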
Discussion – what if the regularity condition fails?
In this subsection, I briefly explain the trade-off between the optimal mechanism when
the regularity condition holds (point a of Claim 3), and the mechanism that may be
optimal when the condition fails (point b of Claim 3).
The former of the two mechanisms uses a non-trivial announcement policy. Using
two signals is beneficial because it allows the designer to always allocate the object in the mechanism
while still inducing the high price in the aftermarket under the high signal. However,
in a cutoff mechanism, the low signal has to be sent for higher types with at least
the probability that it is sent for lower types (by Fact 2). Thus, the low signal is
sent with positive probability for types above the threshold θ*. Under the low signal,
these types will, with relatively high probability, not resell (because the price in the
aftermarket is low, and the probability of a high value is relatively high for these types).
An alternative mechanism is to only send the high signal (and hence always induce a
high price conditional on allocating the object) at the cost of not allocating the good to
low types. The comparison between the two mechanisms depends on the shape of the
distribution F . If F fails the regularity condition, the latter mechanism may sometimes
be optimal.
A.8 Proof of Claim 4
I drop the subscripts on x̄ and θ* in the proof.
Deleting terms that do not depend on q, and letting y(θ) ≡ (1 − q(θ))x(θ), the
design problem can be written as
$$\max_y \int_0^1 \left[J(\theta)-1\right]y(\theta)f(\theta)\,d\theta$$
subject to
$$0 \le y(\theta) \le x(\theta), \quad \forall\theta \in \Theta,$$
$$y(\theta)\ \text{is non-decreasing in}\ \theta,$$
$$\int_0^1 \left[(v_h-v_l)-\theta(v-v_l)\right]y(\theta)f(\theta)\,d\theta \ge \int_0^1 \left[(v_h-v_l)-\theta(v-v_l)\right]x(\theta)f(\theta)\,d\theta.$$
Let φ(θ) = J(θ) − 1 and ψ(θ) = (v_h − v_l) − θ(v − v_l). We can apply Lemma 3, which
provides sufficient conditions for y(θ) = min{x(θ), x̄} to be optimal. Define η as the
solution to the equation $\int_{\theta^*}^1 \left(\phi(\theta)+\eta\psi(\theta)\right)f(\theta)\,d\theta = 0$. That is,
$$\eta = \frac{\int_{\theta^*}^1 \left(J(\theta)-1\right)f(\theta)\,d\theta}{\int_{\theta^*}^1 \left(\theta(v-v_l)-(v_h-v_l)\right)f(\theta)\,d\theta} = \frac{1-\theta^*}{(v_h-v_l)-(v-v_l)\,\mathbb{E}[\theta \mid \theta \ge \theta^*]}.$$
By Lemma 3, it is enough to prove that Λ(θ) ≡ φ(θ) + ηψ(θ) crosses zero once and from above. A sufficient condition is that Λ′(θ) ≤ 0 for all θ. This gives us the condition
$$\eta \;\ge\; \frac{\bar J}{v - v_l},$$
where recall that J̄ = max_θ J′(θ). Let ∆ ≡ (v_h − v_l)/(v − v_l). Plugging in the definition of η and simplifying, a sufficient condition for optimality of y(θ) = min{x(θ), x̄} is
$$\Delta - \frac{1-\theta^\star}{\bar J} \;\le\; E[\theta \mid \theta \ge \theta^\star] \;\le\; \Delta. \tag{A.43}$$
By definition of x̄, the obedience constraint holds with equality at y, which means that
$$0 \;=\; \int_{\theta^\star}^1 \big[\Delta - \theta\big]\,\big(x(\theta) - \bar x\big) f(\theta)\, d\theta.$$
Because x(θ) is non-decreasing, the mean value theorem for integrals implies that
$$\int_{\theta^\star}^1 \big[\Delta - \theta\big]\, f(\theta)\, d\theta \;\ge\; 0,$$
and this condition implies that ∆ ≥ E[θ | θ ≥ θ?]. Thus, the sufficient condition (A.43) boils down to
$$\Delta \;\le\; E[\theta \mid \theta \ge \theta^\star] + \frac{1-\theta^\star}{\bar J}\,.$$
For the uniform distribution, we have J̄ = 2 and E[θ | θ ≥ θ?] = (1 + θ?)/2. Consequently,
$$\Delta \;\le\; \frac{1+\theta^\star}{2} + \frac{1-\theta^\star}{2} \;=\; 1,$$
and condition (A.43) always holds.
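As a sanity check on the algebra above, the following sketch verifies numerically, for the uniform distribution, that the integral definition of η matches the closed form, and that the uniform-case sufficient condition reduces to ∆ ≤ 1. The parameter values (v, v_h, v_l, θ?) are illustrative assumptions, not values from the paper.

```python
# Sanity check of the closed form for eta: compare the integral definition with
# the derived expression, for the uniform distribution. All parameter values
# below are illustrative assumptions.
v, vh, vl = 1.0, 0.6, 0.2        # payoffs with vl <= vh <= v
theta_star = 0.3                 # an assumed cutoff

def integrate(g, a, b, n=10_000):
    # midpoint rule; exact for the affine integrands used here
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

J   = lambda t: 2 * t - 1                    # virtual surplus, uniform case
phi = lambda t: J(t) - 1
psi = lambda t: (vh - vl) - t * (v - vl)

# eta from the integral equation  int_{theta*}^1 (phi + eta * psi) f = 0 ...
eta_int = -integrate(phi, theta_star, 1) / integrate(psi, theta_star, 1)

# ... and from the closed form (1 - theta*) / ((vh - vl) - (v - vl) E[theta | theta >= theta*])
E_tail = (1 + theta_star) / 2                # uniform conditional mean
eta_cf = (1 - theta_star) / ((vh - vl) - (v - vl) * E_tail)
assert abs(eta_int - eta_cf) < 1e-6

# the uniform-case sufficient condition boils down to Delta <= 1
Delta = (vh - vl) / (v - vl)
assert Delta <= 1
assert abs(E_tail + (1 - theta_star) / 2 - 1) < 1e-12
```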
A.9 Proof of Claim 5
I denote y(θ) ≡ (1 − q(θ))x(θ) and z(θ) ≡ q(θ)x(θ). Then the joint design problem can be written as
$$\max_{y \ge 0,\; z \ge 0}\ v_h \int_0^1 z(\theta) f(\theta)\, d\theta \;+\; \int_0^1 \big[v_h - (v_h - v_l)(1 - J(\theta))\big]\, y(\theta) f(\theta)\, d\theta \tag{A.44}$$
subject to
$$y(\theta)\ \text{is non-decreasing in}\ \theta, \tag{A.45}$$
$$0 \le y(\theta) + z(\theta) \le 1, \quad \forall\, \theta, \tag{A.46}$$
$$\int_0^1 z(\theta)\, \varphi(\theta) f(\theta)\, d\theta \;\ge\; 0, \tag{A.47}$$
where φ(θ) ≡ (v_h − v_l) − (v − v_l)(1 − θ). I will consider a relaxed problem in which I omit constraint (A.45), and verify at the end that this constraint holds.
After constraint (A.45) is dropped, we can apply standard optimal control techniques to solve problem (A.44) subject to (A.46)–(A.47), treating y(θ) and z(θ) as control variables chosen at any θ from the set U = {(y, z) ∈ [0, 1]² : y + z ≤ 1}. The Hamiltonian is
$$H \;=\; \Big[\underbrace{\big(v_h + \lambda\varphi(\theta)\big)}_{\lambda_z(\theta)}\, z(\theta) \;+\; \underbrace{\big(v_l + (v_h - v_l)J(\theta)\big)}_{\lambda_y(\theta)}\, y(\theta)\Big] f(\theta).$$
Let θ? be the unique point such that φ(θ?) = 0, and let θ̲ be defined as in the statement of Claim 5. Then, we have θ̲ < θ?, and there exists a unique λ > 0 such that λ_z(θ) goes through the point (θ̲, 0). By assumption, λ_y(θ) ≤ 0 for all θ ≤ θ̲. Moreover, for θ ≥ θ?, we have λ_z(θ) ≥ λ_y(θ), by direct inspection. Because λ_z(θ) is affine in θ and λ_y(θ) is convex in θ, this implies that λ_z(θ) ≥ λ_y(θ) for all θ ≥ θ̲. Therefore, y(θ) = 0 and z(θ) = 1_{θ ≥ θ̲} maximize the Hamiltonian point-wise subject to (y(θ), z(θ)) ∈ U. Moreover, z(θ) satisfies constraint (A.47) with equality, by the definition of θ̲. Because y(θ) = 0 is non-decreasing in θ, constraint (A.45) also holds, and thus we have obtained an optimal solution, by the Maximum Principle for optimal control.19
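To illustrate the construction, the following sketch computes θ̲ for the uniform distribution and confirms that θ̲ < θ? and that the multiplier λ is positive. The payoff values are illustrative assumptions; the closed form θ̲ = 2θ? − 1 used in the final check is the uniform-case solution of the binding constraint (it requires θ? > 1/2, which the chosen values satisfy).

```python
# Sketch of the construction in the proof of Claim 5 for uniform f on [0, 1].
# The payoff values v, vh, vl are illustrative assumptions, not from the paper.
v, vh, vl = 1.0, 0.5, 0.1

phi = lambda t: (vh - vl) - (v - vl) * (1 - t)

def tail_integral(b, n=2_000):
    # int_b^1 phi(t) dt by the midpoint rule (exact here: phi is affine)
    h = (1 - b) / n
    return sum(phi(b + (i + 0.5) * h) for i in range(n)) * h

theta_star = 1 - (vh - vl) / (v - vl)        # phi(theta_star) = 0

# theta_low makes z = 1{theta >= theta_low} satisfy (A.47) with equality;
# the tail integral is negative at 0 and positive at theta_star, so bisect.
lo, hi = 0.0, theta_star
for _ in range(60):
    mid = (lo + hi) / 2
    if tail_integral(mid) < 0:
        lo = mid
    else:
        hi = mid
theta_low = (lo + hi) / 2

assert theta_low < theta_star                          # theta_low < theta*, as claimed
assert abs(theta_low - (2 * theta_star - 1)) < 1e-9    # uniform closed form
lam = -vh / phi(theta_low)                             # lambda_z(theta_low) = 0
assert lam > 0                                         # unique positive multiplier
```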
A.10 Proof of Theorem 5
The proof of Theorem 5 follows almost exactly the same steps as the proof of Theorem
2, so I omit most details, and only highlight the differences. I first assume that the
fixed allocation rule x is non-decreasing. I show later how to generalize the proof to
cover the cases of allocation rules that are not non-decreasing.
First, assume that the agent's utility is strongly supermodular, that is, u_h > 0 and u_l = 0. In this case, the assumptions of Theorem 5 imply that the third party's and the designer's utility functions are supermodular. As in the proof of Theorem 2, I define φ(θ) = V^h(θ) − V^l(θ) and ψ(θ) = v^h(θ) − v^l(θ). This time, these functions are non-decreasing.
Thus, Lemma 1 from Appendix A.13 yields the conclusion that the optimal y(θ) =
q(θ)x(θ) takes the form y(θ) = x(θ)1{θ≥θ?} for some θ? ∈ [0, 1]. Therefore, the optimal
q(θ) corresponds to a partitional mechanism.
19 See for example Seierstad and Sydsaeter (1987).
The argument for the case when the agent’s utility is submodular is fully analogous
(see the proof of Theorem 2, modified as above).
Finally, I show how to solve for the optimal q in the case when x is not non-decreasing. For concreteness, consider the case when the agent's utility is supermodular (so that y(θ) = q(θ)x(θ)), and consider two of the constraints in the optimization problem:
0 ≤ y(θ) ≤ x(θ), ∀θ ∈ Θ, (A.48)
y(θ) is non-decreasing in θ. (A.49)
Define, as in footnote 8, the lower monotone envelope of x, denoted x̲(θ):
$$\underline{x}(\theta) \;=\; \sup\{\chi(\theta) \,:\, \chi(\tau) \le x(\tau)\ \forall\, \tau,\ \chi\ \text{is non-decreasing}\}.$$
Constraints (A.48)–(A.49) are equivalent to
$$0 \le y(\theta) \le \underline{x}(\theta), \quad \forall\, \theta \in \Theta, \tag{A.50}$$
$$y(\theta)\ \text{is non-decreasing in}\ \theta, \tag{A.51}$$
where x is replaced by its lower monotone envelope. Once constraint (A.48) is replaced by (A.50), one can apply the same arguments as above (i.e., in the case of a non-decreasing x) to show that a partitional mechanism is optimal.
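A small discrete sketch may help here: on a grid, the largest non-decreasing function lying below x (the lower monotone envelope) is simply the running minimum from the right. The grid values below are arbitrary.

```python
# Discrete sketch of the lower monotone envelope: the largest non-decreasing
# function below x is the running minimum from the right,
# underline_x(theta) = min over tau >= theta of x(tau).
def lower_monotone_envelope(xs):
    env = list(xs)
    for i in range(len(env) - 2, -1, -1):   # sweep right to left
        env[i] = min(env[i], env[i + 1])
    return env

x = [0.2, 0.5, 0.3, 0.4, 0.9, 0.7, 1.0]     # an arbitrary non-monotone rule
env = lower_monotone_envelope(x)
assert env == [0.2, 0.3, 0.3, 0.4, 0.7, 0.7, 1.0]
assert all(a <= b for a, b in zip(env, env[1:]))   # non-decreasing
assert all(e <= xi for e, xi in zip(env, x))       # pointwise below x
```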
A.11 Proof of Proposition 1
The problem of the designer is given by
$$\max_{x,\, q} \int_0^1 \big[q(\theta)V^h(\theta) + (1-q(\theta))V^l(\theta)\big]\, x(\theta) f(\theta)\, d\theta \tag{A.52}$$
subject to
$$0 \le q(\theta) \le 1, \quad \forall\, \theta \in \Theta, \tag{A.53}$$
$$q(\theta)x(\theta)u_h + (1-q(\theta))x(\theta)u_l\ \text{is non-decreasing in}\ \theta, \tag{A.54}$$
$$\int_0^1 v^h(\theta)\, q(\theta) x(\theta) f(\theta)\, d\theta \;\ge\; \int_0^1 v^l(\theta)\, q(\theta) x(\theta) f(\theta)\, d\theta, \tag{A.55}$$
$$\int_0^1 v^l(\theta)\, (1-q(\theta)) x(\theta) f(\theta)\, d\theta \;\ge\; \int_0^1 v^h(\theta)\, (1-q(\theta)) x(\theta) f(\theta)\, d\theta. \tag{A.56}$$
The last constraint (A.56) can be ignored because it will always be slack at the optimal solution (due to the assumption that V^h(θ) ≥ V^l(θ) for all θ).20 A solution exists because the objective function is upper semi-continuous and the set of feasible solutions is compact.
I will solve the problem in three steps. In the first two steps, I solve two auxiliary
problems in which some choice variables are fixed. This allows me to derive restrictions
on the structure of the optimal solution. In the last step, I optimize in the class of
candidate solutions that satisfy these restrictions.
First, I consider an auxiliary problem for a fixed non-decreasing allocation rule x (I will show later that a non-decreasing x is optimal). If a high price is an optimal response for the third party given x when no further information is revealed, then it is clearly optimal not to reveal any information. Thus, I can focus on the case when a low price is quoted if no further information is revealed:
$$\int_0^1 \big(v^h(\theta) - v^l(\theta)\big)\, x(\theta) f(\theta)\, d\theta \;<\; 0.$$
Defining y(θ) = (1 − q(θ))x(θ), the auxiliary problem is
$$\min_y \int_0^1 \big(V^h(\theta) - V^l(\theta)\big)\, y(\theta) f(\theta)\, d\theta \tag{A.57}$$
subject to
$$0 \le y(\theta) \le x(\theta), \quad \forall\, \theta, \tag{A.58}$$
$$(1-\lambda)x(\theta) + \lambda y(\theta)\ \text{is non-decreasing in}\ \theta, \tag{A.59}$$
$$\int_0^1 \big(v^h(\theta) - v^l(\theta)\big)\, x(\theta) f(\theta)\, d\theta \;\ge\; \int_0^1 \big(v^h(\theta) - v^l(\theta)\big)\, y(\theta) f(\theta)\, d\theta. \tag{A.60}$$
20 Formally, this can be verified by solving the relaxed problem and checking ex post that this constraint is satisfied.
Because both V^h(θ) − V^l(θ) and v^h(θ) − v^l(θ) are non-decreasing, by the usual argument, the optimal y pushes mass as far as possible to the left. Formally, consider the following candidate solution, for some α ∈ [0, 1]: y_α(θ) = max{0, ỹ_α(θ)}, where
$$\tilde y_\alpha(\theta) \;=\; \begin{cases} x(\theta) & \text{if } \theta < \alpha, \\[4pt] x(\alpha) - \dfrac{1-\lambda}{\lambda}\,\big(x(\theta) - x(\alpha)\big) & \text{if } \theta \ge \alpha. \end{cases}$$
That is, y_α(θ) is first equal to x(θ), then it is such that constraint (A.59) holds with equality, and then y_α(θ) = 0. I claim that for any feasible y′, there exists α such that y_α achieves a weakly lower value of the objective function (A.57). Indeed, given y′, define a function of the form y_α(θ) such that
$$\int_0^1 y_\alpha(\theta) f(\theta)\, d\theta \;=\; \int_0^1 y'(\theta) f(\theta)\, d\theta.$$
Given a feasible y′(θ), an α that gives rise to the above equality can always be found.21
Then, y′ first-order stochastically dominates y_α, in the sense defined in previous proofs. Therefore, y_α is feasible and achieves a weakly lower value of the objective function (A.57). Hence, an optimal y can always be found in the form y_α, for some α ∈ [0, 1].
Note that when λ = 1, this is the same solution as the one that appeared in the proof of Theorem 2. However, when λ < 1, the optimal function y_α(θ) is first non-decreasing, and then it might be strictly decreasing (if x(θ) is strictly increasing). When y_α(θ) is decreasing, it decreases at exactly the rate that makes constraint (A.59) bind.
In the second step, I prove that the optimal x(θ) is non-decreasing, and derive further necessary conditions on the structure of the solution. First, I change variables in problem (A.52)–(A.55). Let z(θ) = (1 − λ)x(θ) + λy(θ). Then, the problem becomes
$$\max_{x,\, z} \int_0^1 \big[\lambda V^h(\theta) + (1-\lambda)\big(V^h(\theta) - V^l(\theta)\big)\big]\, x(\theta) f(\theta)\, d\theta \;-\; \int_0^1 \big[V^h(\theta) - V^l(\theta)\big]\, z(\theta) f(\theta)\, d\theta \tag{A.61}$$
subject to
$$z(\theta) \;\le\; x(\theta) \;\le\; \min\Big\{1,\ \frac{1}{1-\lambda}\, z(\theta)\Big\}, \quad \forall\, \theta, \tag{A.62}$$
$$z(\theta)\ \text{is non-decreasing in}\ \theta, \tag{A.63}$$
$$\int_0^1 \big(v^h(\theta) - v^l(\theta)\big)\, x(\theta) f(\theta)\, d\theta \;\ge\; \int_0^1 \big(v^h(\theta) - v^l(\theta)\big)\, z(\theta) f(\theta)\, d\theta. \tag{A.64}$$
21 In particular, if y′(θ) satisfies condition (A.60), then the condition is preserved as mass is shifted to the left, because the function v^h(θ) − v^l(θ) is non-decreasing.
I fix an arbitrary feasible z, and show that the optimal x must be non-decreasing. By the usual argument (using the assumption that both V^h(θ) and V^h(θ) − V^l(θ) are non-decreasing), the optimal x(θ) pushes mass as far as possible to the right. That is, the optimal x(θ) must first be equal to the lower bound z(θ), and then to the upper bound min{1, z(θ)/(1 − λ)}. Because z(θ) is non-decreasing as well, it follows that x(θ) is non-decreasing.
Moreover, using the definition of z, the optimal solution has the following structure: for some 0 ≤ α ≤ β ≤ 1,
$$\begin{cases} x(\theta) = y(\theta)\ \text{is non-decreasing} & \theta < \alpha, \\ y(\theta) = 0\ \text{and}\ x(\theta)\ \text{is non-decreasing} & \alpha < \theta < \beta, \\ x(\theta) = 1\ \text{and}\ y(\theta)\ \text{is non-decreasing} & \beta < \theta. \end{cases}$$
Moreover, at the points α and β, monotonicity of z must be preserved. In particular, because y drops down to zero at α, x has to jump up at α, except in the cases when α ∈ {0, 1}.
Since we know that the optimal x(θ) is non-decreasing, we can now combine the structural insights about the solution from both auxiliary problems considered above. We know that y(θ) is non-decreasing and equal to x(θ) on [0, α], and then non-increasing until it hits zero. Thus, we can refine the structure of the optimal solution:
$$\begin{cases} x(\theta) = y(\theta)\ \text{is non-decreasing} & \theta < \alpha, \\ y(\theta) = 0\ \text{and}\ x(\theta)\ \text{is non-decreasing} & \alpha < \theta < \beta, \\ y(\theta) = 0,\ x(\theta) = 1 & \beta < \theta. \end{cases}$$
Given the above structure of the optimal solution, the optimization problem can be, without loss of generality, formulated as
$$\max_{x,\, \alpha} \int_0^\alpha V^l(\theta)\, x(\theta) f(\theta)\, d\theta \;+\; \int_\alpha^1 V^h(\theta)\, x(\theta) f(\theta)\, d\theta \tag{A.65}$$
subject to
$$x(\theta)\ \text{is non-decreasing on}\ [0, \alpha) \cup (\alpha, 1], \tag{A.66}$$
$$x(\alpha^+) \;\ge\; \frac{1}{1-\lambda}\, x(\alpha^-), \tag{A.67}$$
$$\int_\alpha^1 \big(v^h(\theta) - v^l(\theta)\big)\, x(\theta) f(\theta)\, d\theta \;\ge\; 0. \tag{A.68}$$
In the above problem, x(α+) and x(α−) denote the right and left limits of x(θ) at α, respectively, where (by convention) x(0−) = 0 and x(1+) = 1. If α = 0 and x(θ) ≡ 1 satisfy constraint (A.68), then this is clearly the optimal solution (given that V^h(θ) ≥ V^l(θ)). However, such a solution is precluded by assumption (7.1).
Recall that γ_l and γ_h denote the points where V^l and V^h cross zero, respectively. I consider three candidate solutions, depending on whether (i) α = 0, (ii) α = 1, or (iii) α ∈ (0, 1) in the optimal solution. In the last step of the proof, I derive conditions for each of these three candidate solutions to be optimal.
Cases (i) and (ii) are relatively straightforward. In case (i), because V^h is non-decreasing, the optimal solution is x(θ) = 1_{θ ≥ γ} for some γ. Let α? denote the solution to the equation
$$\int_{\alpha}^1 \big(v^h(\theta) - v^l(\theta)\big) f(\theta)\, d\theta \;=\; 0.$$
Then, γ = max{α?, γ_h}. Assumption (7.1) implies that α? ≥ γ_h, so that γ = α?. The optimal y is equal to 0 everywhere in this case. I will refer to this mechanism as mechanism 0.
In case (ii), it is optimal to set x(θ) = y(θ) = 1_{θ ≥ γ_l}. This is mechanism 1 from Proposition 1.
Finally, I consider case (iii). By considering the auxiliary problem in which x(α) is fixed, the problem can be decomposed into two independent parts, on [0, α] and on [α, 1]. On [0, α], the optimal x must take the form x(θ) = x(α)1_{θ ≥ γ_l}. On [α, 1], the optimal x takes the form x(θ) = 1_{θ ≥ max{γ_h, α?}}. If γ_l ≥ α, then we conclude that the optimal solution must take the form from case (i). Therefore, because γ_h ≤ γ_l, we must have γ_l ≤ α ≤ α?, and x(θ) = 1_{θ ≥ α?} on [α, 1]. However, it is not possible in case (iii) that x drops at α, and thus we must have α ≥ α?. Finally, inequality (A.67) must hold with equality at the optimal solution in case (iii). Summarizing, we obtain
$$x(\theta) \;=\; \begin{cases} 0 & \theta < \gamma_l, \\ 1-\lambda & \gamma_l \le \theta < \alpha^\star, \\ 1 & \alpha^\star \le \theta. \end{cases}$$
The corresponding optimal y is equal to x on [0, α?) and equal to 0 on [α?, 1], so that y(θ) = (1 − λ)1_{γ_l ≤ θ < α?}. This is mechanism 2 from Proposition 1.
Summing up, the optimal solution is one of the three candidate solutions derived above. One can directly compare the expected payoffs from these three mechanisms to find the optimal one. The expected payoffs in cases (i)–(iii) are, respectively,
$$\int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta, \tag{A.69}$$
$$\int_{\gamma_l}^1 V^l(\theta) f(\theta)\, d\theta, \tag{A.70}$$
$$(1-\lambda)\int_{\gamma_l}^{\alpha^\star} V^l(\theta) f(\theta)\, d\theta \;+\; \int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta. \tag{A.71}$$
Mechanism 0 (corresponding to the expected payoff (A.69)) is never strictly optimal because mechanism 2 always yields a weakly higher expected payoff. Therefore, either mechanism 1 or mechanism 2 is optimal.
A.12 Proof of Propositions 3a, 3b and 3c
The goal is to calculate (the inverse of)
$$r^\star(\lambda) \;\equiv\; \sup\ \frac{\text{expected payoff of the optimal mechanism}}{\text{expected payoff of the optimal cutoff mechanism}},$$
where the supremum is taken over all distributions f and all 0 ≤ v_l ≤ v_h ≤ v, for a fixed probability λ of the aftermarket. Using Propositions 1 and 2, I can write
$$r^\star(\lambda) \;\le\; \sup\ \frac{(1-\lambda)\int_{\gamma_l}^{\alpha^\star} V^l(\theta) f(\theta)\, d\theta \;+\; \int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta}{\max\Big\{\int_{\gamma_l}^1 V^l(\theta) f(\theta)\, d\theta,\ \int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta\Big\}}\,.$$
Proof of Proposition 3a: For the problem of maximizing efficiency, we have γ_l = 0. It is easy to see that in the worst-case scenario, the two candidate optimal cutoff mechanisms must yield exactly the same surplus, that is,
$$\int_0^1 V^l(\theta) f(\theta)\, d\theta \;=\; \int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta. \tag{A.72}$$
This allows me to formulate the problem as
$$\sup_{v_l,\, v_h,\, v,\, f}\ \frac{(1-\lambda)\int_0^{\alpha^\star} V^l(\theta) f(\theta)\, d\theta \;+\; \int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta}{\int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta}, \tag{A.73}$$
subject to (A.72).
I can normalize one of the parameters v_l, v_h, v because the numerator and the denominator can be divided by a constant without changing the value of the ratio. I choose a normalization such that v = 1 (then, 0 ≤ v_l ≤ v_h ≤ 1).
Using the form of the objective function that arises under total surplus maximization, and in particular its linearity in θ, I can write
$$\int_0^1 V^l(\theta) f(\theta)\, d\theta \;=\; V^l\big(E_f[\theta]\big), \tag{A.74}$$
$$\int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta \;=\; \big(1-F(\alpha^\star)\big)\, V^h\big(E_f[\theta \mid \theta \ge \alpha^\star]\big), \tag{A.75}$$
$$\int_0^{\alpha^\star} V^l(\theta) f(\theta)\, d\theta \;=\; F(\alpha^\star)\, V^l\big(E_f[\theta \mid \theta \le \alpha^\star]\big). \tag{A.76}$$
Therefore, the dependence of the ratio on f is only through three parameters: β ≡ F(α?), θ̄_{α?} ≡ E_f[θ | θ ≥ α?], and θ̲_{α?} ≡ E_f[θ | θ ≤ α?]. In particular, E_f[θ] = β θ̲_{α?} + (1 − β) θ̄_{α?}. Moreover, by the definition of α? (see equation 7.2), we have
$$(v - v_h)\big(1 - F(\alpha^\star)\big) \;=\; \int_{\alpha^\star}^1 (v - v_l)(1-\theta)\, f(\theta)\, d\theta,$$
so that
$$\bar\theta_{\alpha^\star} \;=\; \frac{v_h - v_l}{v - v_l}\,.$$
The distribution parameters are only constrained by θ̲_{α?} ≤ θ̄_{α?}, and in particular the ratio no longer depends explicitly on α?.
The next step is to solve for θ̲_{α?} using equality (A.72). Because (A.72) is linear in θ̲_{α?}, we get a unique solution that we can plug back into (A.73) to obtain
$$r^\star(\lambda) \;\le\; \sup_{v_l,\, v_h}\ \frac{\big(3v_h\lambda - 3\lambda - 2v_h - v_h\lambda^2 + \lambda^2 + 1\big)\, v_l \;+\; v_h^2\lambda^2 - 2v_h^2\lambda + v_h^2 - v_h\lambda^2 + v_h\lambda + \lambda}{\big(2v_h\lambda - 2\lambda - 2v_h + 1\big)\, v_l \;+\; \lambda - v_h^2\lambda + v_h^2}\,.$$
The derivative of the above expression with respect to v_l is
$$\frac{\lambda\,(v_h-1)^2\,(\lambda-1)\,(v_h+\lambda-v_h\lambda)}{\big(v_l + \lambda - 2v_hv_l - 2v_l\lambda - v_h^2\lambda + v_h^2 + 2v_hv_l\lambda\big)^2} \;\le\; 0,$$
because λ ≤ 1, v_h + λ ≥ v_hλ, and the denominator is non-negative. Therefore, it is optimal to set v_l = 0. Plugging this in above, we can conclude that
$$r^\star(\lambda) \;\le\; \sup_{v_h}\ \frac{v_h^2\lambda^2 - 2v_h^2\lambda + v_h^2 - v_h\lambda^2 + v_h\lambda + \lambda}{\lambda - v_h^2\lambda + v_h^2}\,.$$
.
Since an optimal mechanism is trivially a cutoff mechanism when either vh = vl or
vh = v, there must be an interior solution for vh in the above problem. From the
first-order condition, we obtain the following condition for the optimal vh:
2vh + λ− 4vhλ+ 2vhλ2 − λ2
λ− v2hλ+ v2
h
=(2vh − 2vhλ)(v2
hλ2 − 2v2
hλ+ v2h − vhλ2 + vhλ+ λ)
(λ− v2hλ+ v2
h)2
Simplifying,
$$\lambda - 2v_h\lambda + v_h^2\lambda - v_h^2 \;=\; 0.$$
The above quadratic equation has only one solution in the feasible region, and it is equal to
$$v_h \;=\; \frac{\sqrt{\lambda}}{\sqrt{\lambda}+1}\,.$$
Plugging the solution back into the ratio, we obtain
$$r^\star(\lambda) \;\le\; 1 + \frac{1}{2}\big(\sqrt{\lambda} - \lambda\big).$$
The inverse of this expression is the ratio in the statement of Proposition 3a. Finally, notice that the gap is largest for the λ that maximizes √λ − λ, that is, for λ = 1/4. We have r?(1/4) = 9/8.
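The closed forms above can be checked numerically: with v_l = 0, maximize the ratio over a grid of v_h and compare with v_h = √λ/(√λ + 1) and the bound 1 + (√λ − λ)/2. The grid resolution and test values of λ are arbitrary choices.

```python
# Numerical check of the Proposition 3a bound: with vl = 0, maximize the ratio
# over a grid of vh and compare against the closed forms derived above,
# vh* = sqrt(l) / (sqrt(l) + 1) and r*(l) <= 1 + (sqrt(l) - l) / 2.
from math import sqrt, isclose

def ratio(vh, l):
    num = vh**2 * l**2 - 2 * vh**2 * l + vh**2 - vh * l**2 + vh * l + l
    den = l - vh**2 * l + vh**2
    return num / den

for l in (0.1, 0.25, 0.5, 0.9):
    vh_star = sqrt(l) / (sqrt(l) + 1)
    closed = 1 + (sqrt(l) - l) / 2
    grid_max = max(ratio(k / 10_000, l) for k in range(1, 10_000))
    assert isclose(ratio(vh_star, l), closed, rel_tol=1e-12)   # interior FOC solution
    assert closed - 1e-4 < grid_max <= closed + 1e-9           # grid agrees

# the gap is largest at l = 1/4, where the bound equals 9/8
assert isclose(1 + (sqrt(0.25) - 0.25) / 2, 9 / 8)
```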
Proof of Proposition 3c: It is convenient to prove Proposition 3c before Proposition 3b. The proof is immediate:
$$r^\star(\lambda) \;\le\; \frac{(1-\lambda)\int_{\gamma_l}^{\alpha^\star} V^l(\theta) f(\theta)\, d\theta \;+\; \int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta}{\max\Big\{\int_{\gamma_l}^1 V^l(\theta) f(\theta)\, d\theta,\ \int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta\Big\}} \;\le\; \sup_{a \ge 0,\, b \ge 0}\ \frac{(1-\lambda)a + b}{\max\{a, b\}} \;=\; 2 - \lambda.$$
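The final supremum can be confirmed on a grid: the ratio ((1 − λ)a + b)/max{a, b} is maximized at a = b, where it equals 2 − λ. A quick check with an arbitrary λ:

```python
# Grid check that sup_{a, b >= 0} ((1 - l) * a + b) / max(a, b) = 2 - l.
l = 0.3                                           # an arbitrary lambda in [0, 1]
vals = [((1 - l) * a + b) / max(a, b)
        for a in range(1, 200) for b in range(1, 200)]
assert abs(max(vals) - (2 - l)) < 1e-12           # attained at a = b
```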
Proof of Proposition 3b: For the case of revenue maximization, we have V^l(θ) = v_l + (v_h − v_l)J(θ), where J(θ) is the virtual surplus function (non-decreasing by assumption), and V^h(θ) = (1 − λ)V^l(θ) + λv_h. Thus,
$$\int_{\alpha^\star}^1 V^l(\theta) f(\theta)\, d\theta \;=\; \big(1-F(\alpha^\star)\big)\big(v_l + (v_h - v_l)\alpha^\star\big),$$
and we can write
$$\int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta \;=\; \Big[(1-\lambda) + \frac{\lambda v_h}{v_l + (v_h - v_l)\alpha^\star}\Big]\int_{\alpha^\star}^1 V^l(\theta) f(\theta)\, d\theta.$$
Therefore,
$$r^\star(\lambda) \;\le\; \frac{(1-\lambda)\int_{\gamma_l}^{\alpha^\star} V^l(\theta) f(\theta)\, d\theta \;+\; \int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta}{\max\Big\{\int_{\gamma_l}^1 V^l(\theta) f(\theta)\, d\theta,\ \int_{\alpha^\star}^1 V^h(\theta) f(\theta)\, d\theta\Big\}}$$
$$\le\; \frac{(1-\lambda)\int_{\gamma_l}^{\alpha^\star} V^l(\theta) f(\theta)\, d\theta \;+\; \Big[(1-\lambda) + \frac{\lambda v_h}{v_l + (v_h - v_l)\alpha^\star}\Big]\int_{\alpha^\star}^1 V^l(\theta) f(\theta)\, d\theta}{\int_{\gamma_l}^1 V^l(\theta) f(\theta)\, d\theta}$$
$$\le\; 1 - \lambda + \frac{\lambda v_h}{v_l} \;=\; 1 - \lambda(1-r). \tag{A.77}$$
Combining the bound established above with the bound from the proof of Proposition 3c, we obtain that, for revenue maximization,
$$r^\star(\lambda) \;\le\; \min\{2-\lambda,\ 1-\lambda(1-r)\}.$$
Taking the inverse of the above expression yields the conclusion of Proposition 3b.
A.13 Technical Appendix
In this appendix, I consider the maximization problem
$$\max_{y:\, \Theta \to [0,1]} \int_0^1 \varphi(\theta)\, y(\theta) f(\theta)\, d\theta \tag{A.78}$$
subject to
$$y(\theta)\ \text{is non-decreasing}, \tag{A.79}$$
$$y(\theta) \le x(\theta), \quad \forall\, \theta \in \Theta, \tag{A.80}$$
$$\int_0^1 \psi(\theta)\, y(\theta) f(\theta)\, d\theta \;\ge\; c, \tag{A.81}$$
for some upper semi-continuous functions φ : Θ → R, ψ : Θ → R, a non-decreasing upper semi-continuous allocation rule x, and a constant c ∈ R. The function φ is either non-negative or non-positive.
To avoid trivial cases, I make the following assumptions. There exists at least one feasible y. In the case when φ is non-negative, the upper bound is given by y(θ) = x(θ), and
$$\int_0^1 \psi(\theta)\, x(\theta) f(\theta)\, d\theta \;<\; c.$$
In the case when φ is non-positive, the upper bound is given by y(θ) = 0, and I assume that c > 0.
Lemma 1. Consider the maximization problem (A.78)–(A.81).
If φ and ψ are both non-increasing, the optimal solution takes the form y(θ) = min{x(θ), x̄} for x̄ ∈ [0, 1] such that (A.81) holds with equality.
If φ and ψ are both non-decreasing, the optimal solution takes the form y(θ) = x(θ)1_{θ ≥ θ?} for θ? ∈ [0, 1] such that (A.81) holds with equality.
Proof of Lemma 1. Consider two candidate solutions y_1 and y_2. I say that y_1 dominates y_2 if (i) $\int_0^1 y_1(\theta) f(\theta)\, d\theta = \int_0^1 y_2(\theta) f(\theta)\, d\theta$, and (ii) $\int_0^\alpha y_1(\theta) f(\theta)\, d\theta \le \int_0^\alpha y_2(\theta) f(\theta)\, d\theta$ for all α ∈ [0, 1].
Take an optimal solution y? to problem (A.78)–(A.81) (a solution always exists). Let α ≡ $\int_0^1 y^\star(\theta) f(\theta)\, d\theta$. Then, y? solves the problem (A.78)–(A.81) with the additional constraint
$$\alpha \;=\; \int_0^1 y(\theta) f(\theta)\, d\theta. \tag{A.82}$$
Suppose that φ and ψ are non-decreasing. Consider y′(θ) which satisfies (A.79), (A.80), (A.82), and dominates y?. Then, $Y'(\theta) = \int_0^\theta y'(\tau) f(\tau)\, d\tau$ first-order stochastically dominates $Y^\star(\theta) = \int_0^\theta y^\star(\tau) f(\tau)\, d\tau$. It follows that y′ satisfies (A.81) and achieves a weakly higher value of the objective (A.78) than y?, so y′ is also optimal.
Thus, an optimal solution can be found among functions y that satisfy (A.79), (A.80), and are not dominated by any other function satisfying these constraints with the same value of $\int_0^1 y(\theta) f(\theta)\, d\theta$. Because functions of the form y(θ) = x(θ)1_{θ ≥ θ?} dominate all functions satisfying (A.79) and (A.80) with equal $\int_0^1 y(\theta) f(\theta)\, d\theta$, y(θ) = x(θ)1_{θ ≥ θ?} is optimal for some θ? ∈ [0, 1].
It only remains to prove that θ? is set in such a way that condition (A.81) holds with equality. By assumption, there exists a feasible solution y′. Some function of the form y(θ) = x(θ)1_{θ ≥ θ?} dominates y′, so there exists a feasible solution of the form y(θ) = x(θ)1_{θ ≥ θ?}. Again by assumption, whether φ is non-negative or non-positive, the respective upper-bound solutions y ≡ x and y ≡ 0 are not feasible. Thus, (A.81) has to hold with equality.
In the case when φ and ψ are non-increasing, consider y′ satisfying (A.79), (A.80), (A.82) that is dominated by y?. By a similar reasoning, y′ is optimal. Thus, an optimal solution is a function y that is dominated by all functions satisfying (A.79) and (A.80) with the same value of $\int_0^1 y(\theta) f(\theta)\, d\theta$. Such a function takes the form y(θ) = min{x(θ), x̄} for some constant x̄ ∈ [0, 1]. By the same reasoning as in the previous case, x̄ has to be such that constraint (A.81) holds with equality.
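A discrete illustration of Lemma 1 may be useful: in the non-decreasing case, one searches for the cutoff at which y(θ) = x(θ)1_{θ ≥ cutoff} makes constraint (A.81) bind. In the sketch below, f is uniform on [0, 1], and x, ψ, and c are illustrative assumptions.

```python
# Discrete illustration of Lemma 1 (non-decreasing case): find the cutoff such
# that y(theta) = x(theta) * 1{theta >= cutoff} makes constraint (A.81) bind.
# f is uniform on [0, 1]; x, psi, and c below are illustrative assumptions.
x   = lambda t: t            # a non-decreasing allocation rule
psi = lambda t: t - 0.5      # a non-decreasing psi
c   = 0.09                   # assumed right-hand side of (A.81)

def G(b, n=5_000):
    # int_b^1 psi(t) * x(t) dt by the midpoint rule
    h = (1 - b) / n
    return sum(psi(b + (i + 0.5) * h) * x(b + (i + 0.5) * h) for i in range(n)) * h

# G is decreasing on [0.5, 1], with G(0.5) ~ 0.104 > c and G(1) = 0 < c,
# so the binding cutoff can be found by bisection.
lo, hi = 0.5, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    if G(mid) > c:
        lo = mid
    else:
        hi = mid
cutoff = (lo + hi) / 2

assert abs(G(cutoff) - c) < 1e-6     # (A.81) holds with equality at the cutoff
```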
For the remaining results in this appendix, I will use tools from optimal control
theory. The following lemma establishes sufficient conditions for a feasible candidate
solution y to be optimal for the problem (A.78)-(A.81).
Lemma 2. Suppose that y is an absolutely continuous function which satisfies constraints (A.79)–(A.81). Suppose that there exist (i) a function u(θ) ≥ 0 such that $y(\theta) - y(0) = \int_0^\theta u(\tau)\, d\tau$ for all θ ∈ [0, 1], (ii) a piece-wise continuous λ(θ) ≥ 0, (iii) a piece-wise differentiable and piece-wise continuous p(θ) ≤ 0, and (iv) a constant η, such that:
1. u(θ) > 0 =⇒ p(θ) = 0,
2. x(θ) > y(θ) =⇒ λ(θ) = 0,
3. p′(θ) = (λ(θ) − φ(θ) − ηψ(θ)) f(θ), whenever p is differentiable,
4. p can jump up at finitely many points τ, p(τ−) < p(τ+), provided that x(τ) = y(τ),
5. p(0) = 0 and p(1) = 0,
6. if constraint (A.81) does not bind, then η = 0; if constraint (A.81) is assumed to bind, then η is unrestricted; in all other cases, η ≥ 0.
Then, y solves the problem (A.78)–(A.81).
Proof. The result follows by applying Theorem 1 on page 317 of Seierstad and Sydsaeter (1987) to the problem (A.78)–(A.81). The condition p(θ) ≤ 0 and condition (1) come from the requirement that the non-negative control function u(θ) maximize the Hamiltonian (which is linear in the control function). Condition (2) is a complementary-slackness condition on the Lagrange multiplier λ(θ) that comes from the constraint y(θ) ≤ x(θ). Condition (3) is the standard law of motion for the multiplier p, and condition (5) follows from the fact that the endpoints of y are not restricted. Sometimes p might have a jump discontinuity at 1, which is captured by condition (4). The constraint (A.81) is incorporated by defining an auxiliary state variable Γ(θ) with Γ′(θ) = ψ(θ)y(θ)f(θ), Γ(0) = 0, and Γ(1) ≥ c. Because Γ does not appear in the Hamiltonian, the corresponding multiplier η is constant. Condition (6) summarizes the properties of η depending on whether Γ(1) ≥ c is binding or not. The concavity assumptions are satisfied because the problem and the constraints are linear in y.
I now apply Lemma 2 to obtain sufficient conditions for a cutoff mechanism to solve
the problem (A.78) - (A.81).
Lemma 3. Consider problem (A.78)–(A.81) and assume additionally that x is absolutely continuous. Suppose that x̄ is well-defined by
$$\int_0^1 \psi(\theta)\, \min\{x(\theta), \bar x\}\, f(\theta)\, d\theta \;=\; c.$$
Define θ? = max{θ ∈ [0, 1] : x(θ) ≤ x̄}, and η by
$$\int_{\theta^\star}^1 \big(\varphi(\theta) + \eta\psi(\theta)\big) f(\theta)\, d\theta \;=\; 0.$$
If φ(θ) + ηψ(θ) crosses zero once and from above, then y(θ) = min{x(θ), x̄} solves the problem.
Proof. By assumption, y(θ) = min{x(θ), x̄} is a feasible candidate solution. To prove that y is optimal, let λ(θ) = φ(θ) + ηψ(θ) for θ ≤ θ?, and λ(θ) = 0 for θ > θ?. Next, define p as in Lemma 2, with p(0) = 0. By the choice of λ, p(θ) = 0 for θ ≤ θ?. To guarantee that p(1) = 0, we need
$$\int_{\theta^\star}^1 \big(\varphi(\theta) + \eta\psi(\theta)\big) f(\theta)\, d\theta \;=\; 0,$$
and this pins down η. Suppose that φ(θ) + ηψ(θ) crosses zero once and from above. The function φ(θ) + ηψ(θ) has to cross zero to the right of θ?, because otherwise the above equality could not hold. Thus, φ(θ) + ηψ(θ) is positive for θ ≤ θ?. This means that λ(θ) ≥ 0 for all θ. Finally, p(θ) is non-positive because its derivative on [θ?, 1], equal to −(φ(θ) + ηψ(θ))f(θ), is first non-positive, and then non-negative.
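The multiplier construction in this proof can also be checked numerically. In the sketch below, f is uniform, and φ, ψ, and θ? are illustrative assumptions chosen so that φ + ηψ crosses zero once and from above; the check confirms that the resulting p stays non-positive and returns to zero at 1.

```python
# Numerical illustration of the multiplier construction in the proof of Lemma 3.
# Uniform f on [0, 1]; phi, psi, and theta* below are illustrative assumptions
# chosen so that phi + eta*psi crosses zero once and from above.
phi = lambda t: 1 - 2 * t
psi = lambda t: 1.0
theta_star = 0.4

def integral(g, a, b, n=4_000):
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# eta pinned down by int_{theta*}^1 (phi + eta*psi) f = 0
eta = -integral(phi, theta_star, 1) / integral(psi, theta_star, 1)
Lam = lambda t: phi(t) + eta * psi(t)

assert Lam(theta_star) > 0                       # positive at theta*, so lambda(theta) >= 0
assert abs(integral(Lam, theta_star, 1)) < 1e-9  # p(1) = 0

# p(theta) = -int_{theta*}^theta Lam f ds must stay non-positive on [theta*, 1]
ps = [-integral(Lam, theta_star, t) for t in
      [theta_star + k * (1 - theta_star) / 50 for k in range(51)]]
assert all(p <= 1e-9 for p in ps)
```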