Mechanism design, sometimes called implementation theory or institution design,[1] is a branch of economics, social choice, and game theory that deals with designing game forms (or mechanisms) to implement a given social choice function. Because it starts with the end of the game (an optimal result) and then works backwards to find a game that implements it, it is sometimes described as reverse game theory.[2]
Mechanism design has broad applications, including traditional domains of economics such as market design, but also political science (through voting theory) and even networked systems (such as in inter-domain routing).
Mechanism design studies solution concepts for a class of private-information games. Leonid Hurwicz explains that "in a design problem, the goal function is the main given, while the mechanism is the unknown. Therefore, the design problem is the inverse of traditional economic theory, which is typically devoted to the analysis of the performance of a given mechanism."[3]
The 2007 Nobel Memorial Prize in Economic Sciences was awarded to Leonid Hurwicz, Eric Maskin, and Roger Myerson "for having laid the foundations of mechanism design theory."[4] William Vickrey's related work, which helped establish the field, earned him the 1996 prize.
One person, called the "principal", would like to condition his behavior on information privately known to the players of a game. For example, the principal would like to know the true quality of a used car a salesman is pitching. He cannot learn anything simply by asking the salesman, because it is in the salesman's interest to distort the truth. However, in mechanism design, the principal does have one advantage: He may design a game whose rules influence others to act the way he would like.
Without mechanism design theory, the principal's problem would be difficult to solve. He would have to consider all the possible games and choose the one that best influences other players' tactics. In addition, the principal would have to draw conclusions from agents who may lie to him. Thanks to the revelation principle, the principal only needs to consider games in which agents truthfully report their private information.
A game of mechanism design is a game of private information in which one of the agents, called the principal, chooses the payoff structure. Following Harsanyi (1967), the agents receive secret "messages" from nature containing information relevant to payoffs. For example, a message may contain information about their preferences or the quality of a good for sale. We call this information the agent's "type" (usually noted \theta and drawn from a type space \Theta). Agents then report a type to the principal (usually noted with a hat \hat\theta), which can be a strategic lie.
The timing of the game is:
1. The principal commits to a mechanism y that grants an outcome y as a function of the reported type.
2. The agents report, possibly dishonestly, a type profile \hat\theta.
3. The mechanism is executed: the agents receive the outcome y(\hat\theta).
In order to understand who gets what, it is common to divide the outcome y into a goods allocation and a money transfer,

y(\theta) = \{ x(\theta), t(\theta) \}, \quad x \in X, \ t \in T

where x stands for an allocation of goods rendered or received as a function of type, and t stands for a monetary transfer as a function of type.

As a benchmark the designer often defines what would happen under full information. Define a social choice function f(\theta) mapping the (true) type profile directly to the allocation of goods received or rendered,

f(\theta) : \Theta \to X

In contrast a mechanism maps the reported type profile to an outcome (again, both a goods allocation x and a money transfer t):

y(\hat\theta) : \Theta \to Y
See main article: Revelation principle.
A proposed mechanism constitutes a Bayesian game (a game of private information), and if it is well-behaved the game has a Bayesian Nash equilibrium. At equilibrium agents choose their reports strategically as a function of their type, \hat\theta(\theta).
It is difficult to solve for Bayesian equilibria in such a setting because it involves solving for agents' best-response strategies and for the best inference from a possible strategic lie. Thanks to a sweeping result called the revelation principle, no matter the mechanism, a designer can confine attention[5] to equilibria in which agents truthfully report type. The revelation principle states: "To every Bayesian Nash equilibrium there corresponds a Bayesian game with the same equilibrium outcome but in which players truthfully report type."
This is extremely useful. The principle allows one to solve for a Bayesian equilibrium by assuming all players truthfully report type (subject to an incentive compatibility constraint). In one blow it eliminates the need to consider either strategic behavior or lying.
Its proof is quite direct. Assume a Bayesian game in which the agent's strategy and payoff are functions of its type and what others do, u_i\left(s_i(\theta_i), s_{-i}(\theta_{-i}), \theta_i\right). By definition agent i's equilibrium strategy s_i(\theta_i) is Bayesian optimal:

s_i(\theta_i) \in \arg\max_{s'_i \in S_i} \sum_{\theta_{-i}} p(\theta_{-i} \mid \theta_i) \, u_i\left(s'_i, s_{-i}(\theta_{-i}), \theta_i\right)
Simply define a mechanism that would induce agents to choose the same equilibrium. The easiest one to define is for the mechanism to commit to playing the agents' equilibrium strategies for them:

y(\hat\theta) : \Theta \to S(\Theta) \to Y
Under such a mechanism the agents of course find it optimal to reveal type, since the mechanism plays the strategies they found optimal anyway. Formally, choose y(\theta) such that

\begin{align}
\theta_i \in {}& \arg\max_{\theta'_i \in \Theta} \sum_{\theta_{-i}} p(\theta_{-i} \mid \theta_i) \, u_i\left(y(\theta'_i, \theta_{-i}), \theta_i\right) \\[5pt]
&= \sum_{\theta_{-i}} p(\theta_{-i} \mid \theta_i) \, u_i\left(s_i(\theta_i), s_{-i}(\theta_{-i}), \theta_i\right)
\end{align}
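The construction can be illustrated numerically (an illustration, not part of the original argument). In a two-bidder first-price auction with values uniform on [0, 1], the symmetric equilibrium bid is half the value; a direct mechanism that commits to submitting the equilibrium bid \hat\theta/2 on each agent's behalf makes truthful reporting optimal:

```python
import numpy as np

# Two bidders, values uniform on [0, 1]. In the first-price auction the
# symmetric Bayesian Nash equilibrium strategy is s(theta) = theta / 2.
# The direct mechanism y(theta_hat) commits to submitting the bid
# s(theta_hat) on each agent's behalf, so reporting is the only choice left.

def expected_utility(report, true_value, grid):
    """Bidder 1's expected utility from reporting `report` when the
    opponent reports truthfully; opponent types on `grid` are equally
    likely (a discretized uniform prior)."""
    bid1 = report / 2                  # mechanism bids s(report) for agent 1
    bids2 = grid / 2                   # mechanism bids s(theta2) for agent 2
    wins = bid1 > bids2                # agent 1 wins when his bid is higher
    return np.mean(np.where(wins, true_value - bid1, 0.0))

grid = np.linspace(0.005, 0.995, 100)  # discretized type space
for true_value in (0.3, 0.6, 0.9):
    best_report = max(grid, key=lambda r: expected_utility(r, true_value, grid))
    # Truthful reporting (approximately) maximizes expected utility.
    assert abs(best_report - true_value) < 0.02
```

Because the mechanism already plays the equilibrium strategy for the agent, any misreport only moves the agent away from his own best response.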
The designer of a mechanism generally hopes either
1. to design a mechanism y that "implements" a social choice function, or
2. to find the mechanism y that maximizes some value criterion (e.g. profit).

To implement a social choice function f(\theta) is to find some transfer function t(\theta) that motivates agents to pick f(\theta). Formally, if the equilibrium strategy profile under the mechanism maps to the same goods allocation as the social choice function,

f(\theta) = x\left(\hat\theta(\theta)\right)

we say the mechanism implements the social choice function.

Thanks to the revelation principle, the designer can usually find a transfer function t(\theta) to implement a social choice by solving an associated truthtelling game. That is, if agents find truthtelling optimal,

\hat\theta(\theta) = \theta

we say such a mechanism is truthfully implementable. The task is then to solve for a truthfully implementable t(\theta) and impute it back to the original game. An allocation x(\theta) is truthfully implementable if there exists a transfer function t(\theta) satisfying the incentive compatibility (IC) constraint

u(x(\theta), t(\theta), \theta) \geq u(x(\hat\theta), t(\hat\theta), \theta) \quad \forall \theta, \hat\theta \in \Theta

In applications, the IC condition is the key to describing the shape of t(\theta) in any useful way.
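In a discrete setting the IC constraint can be checked directly. The sketch below tests a hypothetical two-type menu \{x(\theta), t(\theta)\} under quasilinear utility u = \theta x - t (the menus and numbers are invented for illustration):

```python
# A small check of the incentive-compatibility (IC) constraint for a
# hypothetical two-type menu. Utility is quasilinear: u = theta * x - t.

def is_truthfully_implementable(menu, types):
    """menu maps each type to its contract (x, t); IC requires every type
    to weakly prefer its own contract to any other type's contract."""
    for theta in types:
        x_own, t_own = menu[theta]
        for theta_hat in types:
            x_dev, t_dev = menu[theta_hat]
            if theta * x_own - t_own < theta * x_dev - t_dev:
                return False
    return True

types = [1.0, 2.0]
# Low type gets a small allocation cheaply; high type pays more for more.
ic_menu  = {1.0: (1.0, 1.0), 2.0: (2.0, 2.5)}
# Here the high type's contract is overpriced, so the high type will lie.
bad_menu = {1.0: (1.0, 1.0), 2.0: (2.0, 4.0)}

assert is_truthfully_implementable(ic_menu, types)
assert not is_truthfully_implementable(bad_menu, types)
```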
Consider a setting in which all agents have a type-contingent utility function u(x, t, \theta). Consider also a goods allocation x(\theta) that is vector-valued of size n (permitting n goods) and assume it is piecewise continuous with respect to its arguments.

The function x(\theta) is implementable only if

\sum_{k=1}^{n} \frac{\partial}{\partial\theta} \left( \frac{\partial u/\partial x_k}{\left|\partial u/\partial t\right|} \right) \frac{\partial x_k}{\partial\theta} \geq 0

whenever x = x(\theta) and t = t(\theta) and x is continuous at \theta.
Its meaning can be understood in two pieces. The first piece says the agent's marginal rate of substitution (MRS) increases as a function of the type,

\frac{\partial}{\partial\theta} \left( \frac{\partial u/\partial x_k}{\left|\partial u/\partial t\right|} \right) = \frac{\partial}{\partial\theta} \mathrm{MRS}_{x,t}

In short, agents will not tell the truth if the mechanism does not offer higher types a better deal. The second piece is a monotonicity condition,

\frac{\partial x}{\partial\theta}

which, to be positive, means higher types must be given more of the good.

There is potential for the two pieces to interact. If for some type range the contract offered less quantity to higher types, \partial x/\partial\theta < 0, the MRS piece would have to grow fast enough to compensate. Such schedules can arise when solving for a mechanism, and they must be "ironed" by flattening the offending interval (discussed below).
Mechanism design papers usually make two assumptions to ensure implementability:

1. \frac{\partial}{\partial\theta} \frac{\partial u/\partial x_k}{\left|\partial u/\partial t\right|} > 0 \quad \forall k

This is known by several names: the single-crossing condition, the sorting condition and the Spence–Mirrlees condition. It means the utility function is of such a shape that the agent's MRS is increasing in type.

2. \exists K_0, K_1 \ \text{such that} \ \left| \frac{\partial u/\partial x_k}{\partial u/\partial t} \right| \leq K_0 + K_1 |t|

This is a technical condition bounding the rate of growth of the MRS.
These assumptions are sufficient to provide that any monotonic x(\theta) is implementable, i.e. that a t(\theta) exists which can implement it. In addition, in the single-good setting the single-crossing condition is sufficient to provide that only a monotonic x(\theta) is implementable, so the designer can confine his search to a monotonic x(\theta).
See main article: Revenue equivalence.
Vickrey (1961) gives a celebrated result that any member of a large class of auctions assures the seller of the same expected revenue and that the expected revenue is the best the seller can do. This is the case if
1. The buyers have identical valuation functions (which may be a function of type)
2. The buyers' types are independently distributed
3. The buyers' types are drawn from a continuous distribution
4. The type distribution bears the monotone hazard rate property
5. The mechanism sells the item to the buyer with the highest valuation

The last condition is crucial to the theorem. An implication is that for the seller to achieve higher revenue he must take a chance on giving the item to an agent with a lower valuation. Usually this means he must risk not selling the item at all.
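The equivalence can be checked numerically (an illustration, not from the source). With independent uniform valuations, the first-price auction's equilibrium bid is \frac{n-1}{n} times the value, while the second-price auction is bid truthfully; both yield expected revenue \frac{n-1}{n+1}:

```python
import random

# Monte Carlo illustration: with independent uniform[0, 1] valuations,
# first-price and second-price auctions yield the same expected revenue,
# (n - 1) / (n + 1) for n bidders.

def simulate(n_bidders, rounds, rng):
    first, second = 0.0, 0.0
    for _ in range(rounds):
        values = sorted(rng.random() for _ in range(n_bidders))
        # Second-price: truthful bidding; winner pays the second-highest value.
        second += values[-2]
        # First-price: equilibrium bid is (n-1)/n * value; winner pays own bid.
        first += values[-1] * (n_bidders - 1) / n_bidders
    return first / rounds, second / rounds

rng = random.Random(0)
fp, sp = simulate(n_bidders=3, rounds=200_000, rng=rng)
theory = (3 - 1) / (3 + 1)   # expected revenue 0.5 for three bidders
assert abs(fp - theory) < 0.01 and abs(sp - theory) < 0.01
```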
See main article: Vickrey–Clarke–Groves mechanism.
The Vickrey (1961) auction model was later expanded by Clarke (1971) and Groves (1973) to treat a public choice problem in which a public project's cost is borne by all agents, e.g. whether to build a municipal bridge. The resulting "Vickrey–Clarke–Groves" mechanism can motivate agents to choose the socially efficient allocation of the public good even if agents have privately known valuations. In other words, it can solve the "tragedy of the commons"—under certain conditions, in particular quasilinear utility or if budget balance is not required.
Consider a setting in which I number of agents have quasilinear utility with private valuations v(x, t, \theta), where the currency t is valued linearly. The VCG designer designs an incentive compatible (hence truthfully implementable) mechanism to obtain the true type profile, from which the designer implements the socially optimal allocation

x^*_I(\theta) \in \underset{x \in X}{\operatorname{argmax}} \sum_i v(x, \theta_i)
The cleverness of the VCG mechanism is the way it motivates truthful revelation. It eliminates incentives to misreport by penalizing any agent by the cost of the distortion he causes. Among the reports the agent may make, the VCG mechanism permits a "null" report saying he is indifferent to the public good and cares only about the money transfer. This effectively removes the agent from the game. If an agent does choose to report a type, the VCG mechanism charges the agent a fee if his report is pivotal, that is if his report changes the optimal allocation x so as to harm other agents. The payment is calculated
t_i(\hat\theta) = \sum_{j \neq i} v\left(x^*_{I-i}(\theta_{I-i}), \theta_j\right) - \sum_{j \neq i} v\left(x^*_I(\hat\theta_i, \theta_{I-i}), \theta_j\right)

where x^*_{I-i} is the socially optimal allocation when agent i is excluded.
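For the bridge example, a minimal sketch of the pivot payments, assuming the cost is split equally so that agent i's net value from building is v_i - C/n (an assumption of this illustration, not part of the general mechanism):

```python
# A sketch of VCG (pivot) payments for the municipal-bridge example.
# Outcome x is build (True) or not (False); the cost C is split equally,
# so agent i's value is v_i - C/n if built and 0 otherwise.

def net_values(values, cost):
    n = len(values)
    return [v - cost / n for v in values]

def vcg(values, cost):
    """Return the chosen outcome and each agent's pivot payment."""
    net = net_values(values, cost)
    build = sum(net) > 0                       # socially optimal decision
    payments = []
    for i in range(len(values)):
        others = net[:i] + net[i + 1:]
        # Welfare of others at their own optimum (agent i removed)...
        best_without_i = max(sum(others), 0.0)
        # ...minus welfare of others at the actually chosen outcome.
        others_at_choice = sum(others) if build else 0.0
        payments.append(best_without_i - others_at_choice)
    return build, payments

# Three residents value the bridge at 30, 40 and 5; it costs 60.
build, payments = vcg([30.0, 40.0, 5.0], 60.0)
assert build                        # total value 75 > 60, so it is built
assert payments[1] == 5.0           # agent 1 is pivotal and pays the harm
assert payments[0] == 0.0 and payments[2] == 0.0   # the others pay nothing
```

Only agent 1 is pivotal: without his report the others' net surplus is negative, so his report changes the decision and he is charged exactly the loss he imposes on them.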
See main article: Gibbard–Satterthwaite theorem.
Gibbard (1973) and Satterthwaite (1975) give an impossibility result similar in spirit to Arrow's impossibility theorem. For a very general class of games, only "dictatorial" social choice functions can be implemented.
A social choice function f is dictatorial if one agent always receives his most-favored goods allocation,

\text{for } f(\Theta), \ \exists i \in I \ \text{such that} \ u_i(x, \theta_i) \geq u_i(x', \theta_i) \quad \forall x' \in X

The theorem states that under general conditions any truthfully implementable social choice function must be dictatorial if,
1. X is finite and contains at least three elements
2. Preferences are rational
3. f(\Theta) = X
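The theorem can be seen at work on the Borda count, which is non-dictatorial and can elect any of three candidates, so it must be manipulable. The brute-force sketch below (voter profile invented for illustration) finds a profitable misreport:

```python
from itertools import permutations

# The Borda count awards 2/1/0 points per ranking. It is non-dictatorial
# with a range of three outcomes, so by Gibbard-Satterthwaite it must be
# manipulable; this sketch finds a profitable lie by brute force.

CANDIDATES = "abc"

def borda_winner(profile):
    """Highest Borda score wins; ties broken alphabetically."""
    scores = {c: 0 for c in CANDIDATES}
    for ranking in profile:
        for points, c in zip((2, 1, 0), ranking):
            scores[c] += points
    return min(CANDIDATES, key=lambda c: (-scores[c], c))

# Voter 0 truly ranks b > a > c; the other two reports are held fixed.
truthful = [("b", "a", "c"), ("a", "c", "b"), ("b", "a", "c")]
honest_winner = borda_winner(truthful)           # candidate "a" wins a tie

# Search voter 0's possible misreports for an outcome he truly prefers.
true_pref = ("b", "a", "c")
manipulations = [
    lie for lie in permutations(CANDIDATES)
    if true_pref.index(borda_winner([lie] + truthful[1:]))
       < true_pref.index(honest_winner)
]
assert manipulations   # burying "a" under "c" elects voter 0's favorite
```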
See main article: Myerson–Satterthwaite theorem.
Myerson and Satterthwaite (1983) show there is no efficient way for two parties to trade a good when they each have secret and probabilistically varying valuations for it, without the risk of forcing one party to trade at a loss. It is among the most remarkable negative results in economics—a kind of negative mirror to the fundamental theorems of welfare economics.
See main article: Shapley value.
Phillips and Marden (2018) proved that for cost-sharing games with concave cost functions, the optimal cost-sharing rule that first minimizes the worst-case inefficiency of a game (the price of anarchy) and then, subject to that, optimizes the best-case outcome (the price of stability) is precisely the Shapley value cost-sharing rule.[6] A symmetric statement holds for utility-sharing games with convex utility functions.
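A minimal sketch of Shapley-value cost sharing, assuming a symmetric concave cost c(S) = \sqrt{|S|} chosen only for illustration; each player's share is his marginal cost averaged over all arrival orders:

```python
from itertools import permutations
from math import isclose, sqrt

# Shapley-value cost sharing for a small hypothetical concave-cost game:
# serving a coalition S costs sqrt(|S|), identical across players, so by
# symmetry each of n players should bear sqrt(n) / n of the total cost.

def shapley_shares(players, cost):
    """Average each player's marginal cost over all arrival orders."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = cost(coalition)
            coalition.append(p)
            shares[p] += cost(coalition) - before
    return {p: s / len(orders) for p, s in shares.items()}

cost = lambda coalition: sqrt(len(coalition))   # concave in coalition size
shares = shapley_shares([1, 2, 3], cost)

# Shares are symmetric and exactly exhaust the total cost (budget balance).
assert all(isclose(s, sqrt(3) / 3) for s in shares.values())
assert isclose(sum(shares.values()), cost([1, 2, 3]))
```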
Mirrlees (1971) introduces a setting in which the transfer function t is easy to solve for. Due to its relevance and tractability it is a common setting in the literature. Consider a single-good, single-agent setting in which the agent has quasilinear utility with an unknown type parameter \theta,

u(x, t, \theta) = V(x, \theta) - t

and in which the designer has a prior CDF over the agent's type, P(\theta). The designer can produce goods at a convex marginal cost c(x) and wants to maximize the expected profit from the transaction

\max_{x(\theta), t(\theta)} E_\theta \left[ t(\theta) - c\left(x(\theta)\right) \right]

subject to the IC condition

u(x(\theta), t(\theta), \theta) \geq u(x(\theta'), t(\theta'), \theta) \quad \forall \theta, \theta'

and the individual-rationality (IR) condition

u(x(\theta), t(\theta), \theta) \geq \underline{u}(\theta) \quad \forall \theta
A trick given by Mirrlees (1971) is to use the envelope theorem to eliminate the transfer function from the expectation to be maximized. Let

U(\theta) = \max_{\theta'} u\left(x(\theta'), t(\theta'), \theta\right)

so that along the truthful equilibrium

\frac{dU}{d\theta} = \frac{\partial u}{\partial\theta} = \frac{\partial V}{\partial\theta}

Integrating,

U(\theta) = \underline{u}(\theta_0) + \int_{\theta_0}^{\theta} \frac{\partial V}{\partial\tilde\theta} \, d\tilde\theta

where \theta_0 is some base type. Substituting the implied transfer

t(\theta) = V(x(\theta), \theta) - U(\theta)

into the maximand gives
\begin{align}
& E_\theta\left[ V(x(\theta), \theta) - \underline{u}(\theta_0) - \int_{\theta_0}^{\theta} \frac{\partial V}{\partial\tilde\theta} \, d\tilde\theta - c\left(x(\theta)\right) \right] \\[5pt]
&{}= E_\theta\left[ V(x(\theta), \theta) - \underline{u}(\theta_0) - \frac{1 - P(\theta)}{p(\theta)} \frac{\partial V}{\partial\theta} - c\left(x(\theta)\right) \right]
\end{align}

where the second line follows from integration by parts.
Because the transfer no longer appears and U(\theta) already embodies incentive compatibility, the designer can drop the IC constraint and maximize this expression pointwise in \theta to obtain the optimal allocation x(\theta).
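The pointwise maximization can be made concrete under hypothetical functional forms (assumed only for this sketch): V(x, \theta) = \theta x, c(x) = x^2/2, \theta uniform on [0, 1] and \underline{u} = 0, so the bracketed expression becomes (2\theta - 1)x - x^2/2 and the closed form is x(\theta) = \max(0, 2\theta - 1):

```python
import numpy as np

# Pointwise maximization of the virtual surplus for hypothetical forms:
# V(x, theta) = theta * x, c(x) = x^2 / 2, theta uniform on [0, 1], so
# P(theta) = theta, p(theta) = 1 and the integrand reduces to
# (theta - (1 - theta)) * x - x^2 / 2.

def virtual_surplus(x, theta):
    return (2 * theta - 1) * x - x ** 2 / 2

xs = np.linspace(0.0, 1.0, 1001)          # candidate allocation levels
for theta in (0.2, 0.5, 0.75, 1.0):
    x_star = xs[np.argmax(virtual_surplus(xs, theta))]
    # The closed-form solution is x(theta) = max(0, 2*theta - 1).
    assert abs(x_star - max(0.0, 2 * theta - 1)) < 1e-3
```

Note that low types (here \theta < 1/2) are excluded entirely, the familiar downward distortion driven by the hazard-rate term.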
In some applications the designer may solve the first-order conditions for the price and allocation schedules yet find they are not monotonic. For example, in the quasilinear setting this often happens when the hazard ratio is itself not monotone. By the Spence–Mirrlees condition the optimal price and allocation schedules must be monotonic, so the designer must eliminate any interval over which the schedule changes direction by flattening it.
Intuitively, what is going on is that the designer finds it optimal to bunch certain types together and give them the same contract. Normally the designer motivates higher types to distinguish themselves by giving them a better deal. But if there are too few higher types on the margin, the designer does not find it worthwhile to grant lower types a concession (called their information rent) in order to charge higher types a type-specific contract.
Consider a monopolist principal selling to agents with quasilinear utility, the example above. Suppose the allocation schedule x(\theta) satisfying the first-order conditions has a single interior peak at \theta_1 and a single interior valley at \theta_2 > \theta_1. Any level x attained in the nonmonotonic region is reached by some type \phi_1(x) \leq \theta_1 on the rising segment and again by some type \phi_2(x) \geq \theta_2 past the valley; since x(\theta) is increasing on those segments, \phi_1 and \phi_2 are increasing functions of x. Ironing splices the two rising branches together: it picks a level x and assigns it to every type between \phi_1(x) and \phi_2(x), replacing the nonmonotonic stretch of x(\theta) with a flat portion described by a single inverse \phi(x).
The proof uses the theory of optimal control. It considers the set of intervals \left[\underline\theta, \overline\theta\right] in the nonmonotonic region of x(\theta) over which it might flatten the schedule. It then writes a Hamiltonian to obtain necessary conditions for an x(\theta) within the intervals
1. that does satisfy monotonicity, and
2. for which the monotonicity constraint is not binding on the boundaries of the interval.

Condition two ensures that the x(\theta) satisfying the optimal control problem reconnects to the schedule in the original problem at the interval boundaries (no jumps). Any x(\theta) satisfying the necessary conditions must be flat because it must be monotonic and yet reconnect at the boundaries.
As before maximize the principal's expected payoff, but this time subject to the monotonicity constraint

\frac{\partial x}{\partial\theta} \geq 0

and use a Hamiltonian to do it, with shadow price \nu(\theta),

H = \left( V(x, \theta) - \underline{u}(\theta_0) - \frac{1 - P(\theta)}{p(\theta)} \frac{\partial V}{\partial\theta}(x, \theta) - c(x) \right) p(\theta) + \nu(\theta) \frac{\partial x}{\partial\theta}

where x is a state variable and \partial x/\partial\theta the control.
As usual in optimal control, the costate evolution equation is

\frac{\partial\nu}{\partial\theta} = -\frac{\partial H}{\partial x} = -\left( \frac{\partial V}{\partial x}(x, \theta) - \frac{1 - P(\theta)}{p(\theta)} \frac{\partial^2 V}{\partial\theta\,\partial x}(x, \theta) - \frac{\partial c}{\partial x}(x) \right) p(\theta)

Taking advantage of condition 2, note the monotonicity constraint is not binding at the boundaries of the \theta interval,

\nu(\underline\theta) = \nu(\overline\theta) = 0
meaning the costate equation can be integrated over the interval and also equals zero,

\int_{\underline\theta}^{\overline\theta} \left( \frac{\partial V}{\partial x}(x, \theta) - \frac{1 - P(\theta)}{p(\theta)} \frac{\partial^2 V}{\partial\theta\,\partial x}(x, \theta) - \frac{\partial c}{\partial x}(x) \right) p(\theta) \, d\theta = 0

That is, the average distortion of the principal's surplus must be zero over the flattened interval. Since a monotonic schedule that reconnects at the boundaries must be flat, the ironed solution assigns the constant allocation x satisfying this integral condition to every type \theta in the bunched interval.
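Numerically, the flattening can be performed with the pool-adjacent-violators algorithm, which averages the schedule over each offending interval, a discrete analogue of the zero-average-distortion condition above (uniform weighting of types is assumed in this sketch):

```python
# Numerical "ironing": project a nonmonotonic schedule onto the set of
# monotonic schedules with the pool-adjacent-violators algorithm, which
# averages the schedule over each offending interval (uniform type
# weights assumed for this sketch).

def iron(schedule):
    """Return the nondecreasing schedule that pools and averages
    adjacent values wherever monotonicity is violated."""
    blocks = []                       # each block: [sum, count]
    for value in schedule:
        blocks.append([value, 1])
        # Merge backwards while the previous block's mean exceeds ours.
        while (len(blocks) > 1
               and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    ironed = []
    for s, c in blocks:
        ironed.extend([s / c] * c)
    return ironed

# A schedule with an interior peak and valley, as in the ironing example.
schedule = [0.1, 0.3, 0.6, 0.4, 0.2, 0.5, 0.8]
flat = iron(schedule)
assert all(a <= b for a, b in zip(flat, flat[1:]))   # now monotonic
assert abs(sum(flat) - sum(schedule)) < 1e-9         # averaging preserves mass
```

The pooled block is exactly a bunching interval: every type in it receives the same allocation, the average of the original schedule over the interval.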