Diffstat (limited to 'ps2/main.tex')
| -rw-r--r-- | ps2/main.tex | 45 |
1 file changed, 38 insertions, 7 deletions
diff --git a/ps2/main.tex b/ps2/main.tex
index 17f7693..9f547d6 100644
--- a/ps2/main.tex
+++ b/ps2/main.tex
@@ -34,6 +34,7 @@
 \newtheorem{theorem}{Theorem}
 \newtheorem{lemma}{Lemma}
+\newtheorem*{claim}{Claim}
 
 \author{Thibaut Horel \and Paul Tylkin}
 \title{Economics 2099 Problem Set 2 -- Solutions}
@@ -46,6 +47,36 @@ \section*{Exercise 4.14*}
 
 \section*{Exercise 4.19}
+\begin{claim} In regular, matroid environments, the revenue of the surplus maximization mechanism with monopoly reserves is a 2-approximation to the optimal mechanism revenue.
+
+\begin{proof} Let REF denote the optimal mechanism and its expected revenue and let APX denote the surplus maximization mechanism with monopoly reserves and its expected revenue. Similarly to the single-item case, we know that REF $\geq$ APX and want to show that REF $\leq$ 2APX. Let $I$ denote the set of agents allocated by the optimal mechanism and let $J$ denote the set of agents allocated by the surplus maximization mechanism with monopoly reserves. Then, $$REF = \E\left[\sum_{i \in I}\phi_i(v_i)\right]$$ and $$APX = \E\left[\sum_{j \in J}\phi_j(v_j)\right].$$ By linearity of expectation, we can write this as $$REF = \sum_{i \in I}\E\left[\phi_i(v_i)\right]$$ and $$APX = \sum_{j \in J}\E\left[\phi_j(v_j)\right].$$ Now, we can write the expected revenue of the optimal mechanism by conditioning on whether $i = j$ for $i \in I$, $j \in J$ (i.e., whether agent $i$ is allocated the same item by the optimal mechanism as agent $j$ is by the surplus maximization mechanism; an alternative way of phrasing this is that the map $I \rightarrow J$ is an injection). So we can write $$REF = \underbrace{\sum_{\substack{i \in I \\ j \in J}}\E[\phi_i(v_i) | i = j] \Pr[i=j]}_{REF_=} + \underbrace{\sum_{\substack{i \in I \\ j \in J}}\E[\phi_i(v_i) | i \neq j] \Pr[i\neq j]}_{REF_{\neq}}.$$ We will prove the desired result by showing both $REF_=$ and $REF_{\neq}$ are bounded from above by APX.
+
+\begin{align}
+REF_= &= \sum_{\substack{i \in I \\ j \in J}}\E[\phi_i(v_i) | i = j] \Pr[i=j] \\
+&= \sum_{\substack{i \in I \\ j \in J}}\E[\phi_j(v_j) | i = j] \Pr[i=j] \\
+&\leq \sum_{\substack{i \in I \\ j \in J}}\E[\phi_j(v_j) | i = j] \Pr[i=j] + \sum_{\substack{i \in I \\ j \in J}}\E[\phi_j(v_j) | i \neq j] \Pr[i\neq j] \label{42}\\
+&= \sum_{j \in J}\E\left[\phi_j(v_j)\right] \\
+&= APX.
+\end{align}
+
+Note that Equation $\ref{42}$ follows from Lemma 4.2 (i.e., from the monotonicity of virtual value functions).
+
+Next, we consider $REF_{\neq}$.
+
+\begin{align}
+REF_{\neq} &= \sum_{\substack{i \in I \\ j \in J}}\E[\phi_i(v_i) | i \neq j] \Pr[i\neq j] \\
+&\leq \sum_{\substack{i \in I \\ j \in J}}\E[v_i | i \neq j] \Pr[i\neq j] \\
+&\leq \sum_{\substack{i \in I \\ j \in J}}\E[\hat{v}_i | i \neq j] \Pr[i\neq j] \\
+&= \sum_{\substack{i \in I \\ j \in J}}\E[v_j | i \neq j] \Pr[i\neq j] \label{428} \\
+&\leq \sum_{\substack{i \in I \\ j \in J}}\E[v_j | i = j] \Pr[i=j] + \sum_{\substack{i \in I \\ j \in J}}\E[v_j | i \neq j] \Pr[i\neq j] \label{429} \\
+&= APX.
+\end{align}
+
+Note that Equation $\ref{428}$ follows from Lemma 4.28 and Equation $\ref{429}$ follows from Theorem 4.29.
+
+\end{proof}
+
+\end{claim}
 
 \section*{Additional Problem}
@@ -83,7 +114,7 @@ distribution over subsets of $A$:
   \end{cases}
 \end{displaymath}
 and similarly for $\Pr^A_{\mathbf{F^I}}$. That is, the only subsets of $A$ which
-have a non-zero probability are the ones which ``correspond'' to a $n$-tuple in
+have a non-zero probability are the ones which ``correspond'' to an $n$-tuple in
 $A_1\times\ldots\times A_n$ (they are in the image of the canonical injection
 given above).
@@ -115,13 +146,13 @@ simply $a$. Hence we can apply Lemma 4.13 to $g$ for subsets of $A$ and obtain:
 \paragraph{General distributions}
 We first look at a continuous distribution $\mathbf{F}$ with bounded support.
 On the support of $\mathbf{F}$, it is
-possible to approximate its CDF arbitrarily well in $L^\infty$ norm (hence in
-$L^2$ norm since we are on a bounded set) by simple functions (convex
+possible to approximate its CDF arbitrarily well in the $L^\infty$ norm (hence in
+the $L^2$ norm since we are on a bounded set) by simple functions (convex
 combination of step functions). But a step function is simply the CDF of
-a dirac distribution. Hence, we can approximate $\mathbf{F}$ arbitrarily well
-in $L^2$ norm with finite support distributions. But convergence in $L^2$ norm
-implies weak convergence by Cauchy-Schwarz inequality. That is, we have
-a sequence $F_n$ of finite support distribution such that:
+a Dirac distribution. Hence, we can approximate $\mathbf{F}$ arbitrarily well
+in the $L^2$ norm with finite support distributions. But convergence in the $L^2$ norm
+implies weak convergence by the Cauchy-Schwarz inequality. That is, we have
+a sequence $F_n$ of finite support distributions such that:
 \begin{displaymath}
   \E_{F_n}[\max_i v_i] \xrightarrow[n\rightarrow \infty]{} \E_F[\max_i v_i]
 \end{displaymath}
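The claim in the diff leans on two objects from Myerson's framework: the virtual value $\phi_i(v_i) = v_i - (1-F_i(v_i))/f_i(v_i)$ and the monopoly reserve $\phi_i^{-1}(0)$. As a numeric sanity check (not part of the diff, and with illustrative function names), the sketch below instantiates both for the single-item, two-bidder uniform[0,1] case, where $\phi(v) = 2v - 1$, the monopoly reserve is $1/2$, and the surplus maximization mechanism with monopoly reserves is just a second-price auction with reserve $1/2$. In this symmetric case it matches the optimal revenue of $5/12$ exactly, well within the factor-2 guarantee the claim establishes for general regular matroid environments.

```python
import random

def virtual_value(v: float) -> float:
    """Myerson virtual value for uniform[0,1]: v - (1 - F(v))/f(v) = 2v - 1."""
    return 2.0 * v - 1.0

# The monopoly reserve is where the virtual value crosses zero: phi(1/2) = 0.
MONOPOLY_RESERVE = 0.5

def spa_revenue_with_reserve(v1: float, v2: float, r: float) -> float:
    """Revenue of a single-item second-price auction with reserve r."""
    hi, lo = max(v1, v2), min(v1, v2)
    if hi < r:
        return 0.0            # no bidder clears the reserve: no sale
    return max(lo, r)         # winner pays max(second bid, reserve)

def estimate_revenue(n: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo estimate of expected revenue for two i.i.d. U[0,1] bidders."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += spa_revenue_with_reserve(rng.random(), rng.random(),
                                          MONOPOLY_RESERVE)
    return total / n

if __name__ == "__main__":
    # Analytically, E[max(phi(v1), phi(v2), 0)] = 5/12 for this instance.
    print(f"estimated revenue: {estimate_revenue():.4f} (optimal = {5/12:.4f})")
```

Regularity here is just the monotonicity of $\phi$ (it is increasing in $v$), which is exactly the property Lemma 4.2 supplies in the step labeled Equation (42) of the proof.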
