author    Thibaut Horel <thibaut.horel@gmail.com>  2013-02-10 21:53:28 -0800
committer Thibaut Horel <thibaut.horel@gmail.com>  2013-02-10 21:53:28 -0800
commit    36408fb6aab17f2fb9bcb3f87fd473a42da4fd0d (patch)
tree      46d9ba5e4f73186b503bc4a94a6754399a17fe9c
parent    3eab27a691c9c1319dcadbcd2b2325044b2cef14 (diff)
download  recommendation-36408fb6aab17f2fb9bcb3f87fd473a42da4fd0d.tar.gz
Another pass on section 4
-rw-r--r--  main.tex | 383
1 file changed, 181 insertions, 202 deletions
diff --git a/main.tex b/main.tex
index 903d1ed..f057b4f 100644
--- a/main.tex
+++ b/main.tex
@@ -1,34 +1,33 @@
\label{sec:main}
-In this section we present our mechanism for \EDP. Prior approaches to budget
-feasible mechanisms build upon the full information case which we discuss
-first.
+In this section we present our mechanism for \SEDP. Prior approaches to budget
+feasible mechanisms for submodular maximization build upon the full-information
+case which we discuss first.
-\paragraph{Full information submodular maximization}
+\subsection*{Submodular maximization in the full-information case}
-The best known approximation ratio $\frac{e}{e-1}$ for maximizing a submodular
-set function under a cardinality constraint is attained by using a greedy
-algorithm~\cite{nemhauser}. In this algorithm, elements are added to the
-solution set according to the following greedy selection rule. Assume that
-$S\subseteq \mathcal{N}$ is the solution set constructed thus far; then, the
-next element $i$ to be included in $S$ is the one with the highest
-\emph{marginal-value}: $V(S\cup\{i\}) - V(S)$. This process terminates when the
-cardinality of $S$ reaches the cardinality limit.
-
-For submodular maximization under a budget constraint \eqref{eq:non-strategic},
-the greedy algorithm can be generalized: if $S\subset\mathcal{N}$ is the set
-constructed thus far, the element $i$ to be included is the one which maximizes
-the \emph{marginal-value-per-cost}:
+In the full-information case, submodular maximization under a budget constraint
+\eqref{eq:non-strategic} relies on a greedy heuristic in which elements are
+added to the solution set according to the following greedy selection rule.
+Assume that $S\subseteq\mathcal{N}$ is the set constructed thus far; then the
+next element $i$ to be included is the one which maximizes the
+\emph{marginal-value-per-cost}:
\begin{align}
i = \argmax_{j\in\mathcal{N}\setminus S}\frac{V(S\cup\{j\}) - V(S)}{c_j}\label{greedy}
\end{align}
This is repeated until the sum of costs of elements in $S$ reaches the budget
-constraint $B$. Unfortunately, the greedy algorithm alone has an unbounded
-approximation ratio~\cite{khuller}. However, let $i^*
-= \argmax_{i\in\mathcal{N}} V(\{i\})$ be the element of maximum value, and
-$S_G$ the set computed by the greedy algorithm, taking the maximum between
-$V(S_G)$ and $V(i^*)$ yields a bounded approximation ratio. Formally, the
-following lemma holds.
+constraint $B$. We denote by $S_G$ the set constructed by this heuristic. As
+shown in \cite{khuller}, the ratio $OPT/V(S_G)$ can be arbitrarily large.
+However, let $i^* = \argmax_{i\in\mathcal{N}} V(\{i\})$ be the element of
+maximum value, then defining:
+\begin{displaymath}
+S^* = \left\{
+\begin{aligned}
+\{i^*\}\quad &\textrm{if}\; V(\{i^*\}) \geq V(S_G)\\
+S_G\quad&\textrm{otherwise}
+\end{aligned}\right.,
+\end{displaymath}
+$V(S^*)$ approximates $OPT$ within a bounded factor. Formally, the following lemma holds.
\begin{lemma}~\cite{chen}\label{lemma:greedy-bound}
Let $S_G$ be the set computed by the greedy algorithm and let
$i^* = \argmax_{i\in\mathcal{N}} V(i).$ We have:
@@ -41,107 +40,88 @@ the afore-mentioned approach. A more sophisticated
algorithm~\cite{sviridenko-submodular} can attain the optimal $\frac{e}{e-1}$
approximation ratio.
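The greedy heuristic and the max-with-best-singleton trick above can be sketched in a few lines of Python. This is an illustrative implementation, not the paper's code: the value function, costs, and tie-breaking rule below are assumptions for the example, and we use the common variant of the greedy rule that skips elements no longer fitting in the budget.

```python
def greedy_per_cost(items, V, cost, B):
    # Greedy selection rule: repeatedly add the element with the highest
    # marginal value per cost among those still fitting in the budget.
    S, spent = [], 0.0
    candidates = sorted(items)
    while candidates:
        base = V(S)
        best, best_ratio = None, 0.0
        for j in candidates:
            if spent + cost[j] <= B:
                ratio = (V(S + [j]) - base) / cost[j]
                if ratio > best_ratio:
                    best, best_ratio = j, ratio
        if best is None:
            break
        S.append(best)
        spent += cost[best]
        candidates.remove(best)
    return S

def greedy_with_singleton(items, V, cost, B):
    # Taking the max of V(S_G) and the best feasible single element
    # restores a bounded approximation ratio (cf. the lemma above).
    S_G = greedy_per_cost(items, V, cost, B)
    feasible = [i for i in sorted(items) if cost[i] <= B]
    i_star = max(feasible, key=lambda i: V([i]))
    return [i_star] if V([i_star]) >= V(S_G) else S_G
```

With a small coverage function (submodular) one sees the phenomenon directly: the greedy set can be beaten by a single expensive, high-value element, which the max-with-singleton rule then picks.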
-\fussy
-\paragraph{Strategic case}
-We could design the allocation function of our mechanism by following the full
-information approach: allocating to agent $i^*$ when $V(i^*)\geq V(S_G)$ and to
-$S_G$ otherwise. However, \citeN{singer-influence} notes that this
-allocation function is not monotonous, and thus breaks incentive compatibility
-by Myerson's theorem (Theorem~\ref{thm:myerson}).
+\subsection*{Submodular maximization in the strategic case}
+
+Following the full information approach, allocating to agent $i^*$ when
+$V(i^*)\geq V(S_G)$ and to $S_G$ otherwise, breaks incentive compatibility.
+Indeed, \citeN{singer-influence} notes that this allocation function is not
+monotone, which implies by Myerson's theorem (Theorem~\ref{thm:myerson}) that
+the resulting mechanism is not truthful.
Let us denote by $OPT_{-i^*}$ the optimal value achievable in the
full-information case after removing $i^*$ from the set $\mathcal{N}$:
-\begin{equation}
+\begin{equation}\label{eq:opt-reduced}
OPT_{-i^*} \defeq \max_{S\subseteq\mathcal{N}\setminus\{i^*\}} \Big\{V(S) \;\Big| \;
\sum_{i\in S}c_i\leq B\Big\}
\end{equation}
-\citeN{singer-mechanisms} and \citeN{chen} prove that allocating to $i^*$ when
-$V(i^*)\geq C.OPT_{-i^*}$ (for some constant $C$) and to $S_G$ otherwise yields
-a 8.34-approximation mechanism. However, $OPT_{-i^*}$, the solution of the
-full-information case, cannot be computed in poly-time when the underlying
-problem is NP-hard (unless P=NP), as is the case for \EDP{}.
+\citeN{singer-mechanisms} and \citeN{chen} prove that using the following
+allocation:
+\begin{displaymath}
+\textrm{allocate to}\quad\left\{
+\begin{aligned}
+i^*\quad&\textrm{if}\; V(i^*)\geq C\cdot OPT_{-i^*} \\
+S_G\quad&\textrm{otherwise}
+\end{aligned}
+\right.
+\end{displaymath}
+yields an 8.34-approximation mechanism for an appropriate $C$ (see also
+Algorithm~\ref{mechanism}). However, $OPT_{-i^*}$, the maximum value attainable
+in the full-information case, cannot be computed in poly-time when the
+underlying problem is NP-hard (unless P=NP), as is the case for \SEDP{}.
-Instead, \citeN{singer-mechanisms} and Chen \emph{et al.}~\cite{chen}
-suggest to replace $OPT_{-i^*}$ by a quantity $L^*$ satisfying the following
-properties:
+Instead, \citeN{singer-mechanisms} and \citeN{chen}
+suggest replacing $OPT_{-i^*}$ by a quantity $OPT'_{-i^*}$ satisfying the
+following properties:
\begin{itemize}
- \item $L^*$ must not depend on agent $i^*$'s reported
- cost and must be monotonous: it can only increase when agents decrease
+ \item $OPT'_{-i^*}$ must not depend on agent $i^*$'s reported
+ cost and must be monotone: it can only increase when agents decrease
their costs. This is to ensure the monotonicity of the allocation function.
- \item $L^*$ must be close to $OPT_{-i^*}$ to maintain a bounded
+ \item $OPT'_{-i^*}$ must be close to $OPT_{-i^*}$ to maintain a bounded
approximation ratio.
- \item $L^*$ must be computable in polynomial time.
+ \item $OPT'_{-i^*}$ must be computable in polynomial time.
\end{itemize}
-Such a quantity $L^*$ can be found by considering relaxations of the
-optimization problem. A function $R:[0, 1]^{n}\to\reals_+$ defined on the
-hypercube $[0, 1]^{n}$ is a \emph{fractional relaxation} of $V$ over the set
-$\mathcal{N}$ if $R(\id_S) = V(S)$ for all $S\subseteq\mathcal{N}$, where $\id_S$
+Such a quantity can be found by considering relaxations of the
+optimization problem \eqref{eq:opt-reduced}. A function $L:[0, 1]^{n}\to\reals_+$ defined on the hypercube $[0, 1]^{n}$ is a \emph{fractional relaxation} of $V$ over the set
+$\mathcal{N}$ if $L(\id_S) = V(S)$ for all $S\subseteq\mathcal{N}$, where $\id_S$
denotes the indicator vector of $S$. The optimization program
\eqref{eq:non-strategic} extends naturally to such relaxations:
\begin{align}
OPT' = \max_{\lambda\in[0, 1]^{n}}
- \left\{R(\lambda) \mid \sum_{i=1}^{n} \lambda_i c_i
+ \left\{L(\lambda) \Big| \sum_{i=1}^{n} \lambda_i c_i
\leq B\right\}\label{relax}
\end{align}
-In \cite{singer-mechanisms} and \cite{chen}, using $L^* = OPT'_{-i^*}$, when
-$R$ is the \emph{multilinear extension} of $V$ (see
-Section~\ref{sec:relaxation}) yields an constant-approximation mechanism for
-\textsc{Knapsack}.
+One of the main technical contributions of \citeN{chen} and
+\citeN{singer-influence} is to come up with appropriate relaxations for
+\textsc{Knapsack} and \textsc{Coverage} respectively.
-In \cite{singer-influence}, the optimization program \eqref{relax}
-cannot be solved efficiently when $R$ is the multilinear extension of $V$.
-Consequently, Singer introduces a second relaxation which he relates to the
-multilinear extension through the \emph{pipage rounding} framework of \citeN{pipage}.
+\subsection*{Our approach}
-\paragraph{Our approach}
-Following Chen \citeN{chen} and \citeN{singer-influence},
-we in turn introduce a relaxation $L$ specifically tailored to the value
-function of \EDP:
+\sloppy
+We introduce a relaxation $L$ specifically tailored to the value function of
+\SEDP:
\begin{equation}\label{eq:our-relaxation}
\forall\,\lambda\in[0,1]^n,\quad L(\lambda) \defeq
\log\det\left(I_d + \sum_{i\in\mathcal{N}} \lambda_i x_i\T{x_i}\right),
\end{equation}
-and then use $L^* = OPT'_{-i^*}$ with $R=L$ in our mechanism. The function $L$
-is well-known to be concave and even self-concordant (see \emph{e.g.},
-\cite{boyd2004convex}). In this case, the analysis of Newton's method for
-self-concordant functions in \cite{boyd2004convex}, shows that finding the
-maximum of $L$ to any precision $\varepsilon$ can be done in
+The function $L$ is well-known to be concave and even self-concordant (see
+\emph{e.g.}, \cite{boyd2004convex}). In this case, the analysis of Newton's
+method for self-concordant functions in \cite{boyd2004convex} shows that
+finding the maximum of $L$ to any precision $\varepsilon$ can be done in
$O(\log\log\varepsilon^{-1})$ iterations. Being the solution to a maximization
problem, $OPT'_{-i^*}$ satisfies the required monotonicity property. The main challenge
-will be to prove that $OPT'_{-i^*}$, for the relaxation $R=L$, is close to
-$OPT$. As in~\cite{singer-influence} this will be done by using the
-\emph{pipage rounding} framework of~\citeN{pipage} and forms the technical bulk
-of the paper.
+will be to prove that $OPT'_{-i^*}$, for our relaxation $L$, is close to $OPT$.
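The defining property of the relaxation, $L(\id_S) = V(S)$, together with the concavity and monotonicity of $L$, is easy to check numerically. The sketch below is an illustration under assumed data (random unit-norm experiment vectors), not part of the paper's implementation; it evaluates $L$ via `numpy`'s stable log-determinant.

```python
import numpy as np

def L(lam, X):
    # L(lambda) = log det(I_d + sum_i lambda_i * x_i x_i^T)
    d = X.shape[1]
    M = np.eye(d) + sum(l * np.outer(x, x) for l, x in zip(lam, X))
    return np.linalg.slogdet(M)[1]

def V(S, X):
    # V(S) = L evaluated at the indicator vector of S:
    # by construction L is a fractional relaxation of V.
    return L([1.0 if i in S else 0.0 for i in range(len(X))], X)
```

Concavity (and hence the applicability of Newton's method) and monotonicity in $\lambda$ can be spot-checked on random points.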
-The resulting mechanism for \EDP{} is composed of
-\begin{itemize}
-\item
- the allocation function
-presented in Algorithm~\ref{mechanism}, and
-\item
-the payment function which pays each
-allocated agent $i$ her threshold payment as described in Myerson's Theorem.
-%The constant $C$ is an absolute
-%constant that determined in Section~\ref{sec:proofofmainthm}
-%(see \eqref{eq:c}).
- In the case where
-$\{i^*\}$ is the allocated set, her threshold payment is $B$ (she would be have
-been dropped on line 1 of Algorithm~\ref{mechanism} had she reported a higher
-cost). A closed-form formula for threshold payment when $S_G$ is the allocated
-set can be found in~\cite{singer-mechanisms}.
-\end{itemize}
-
-\begin{algorithm}[!t]
- \caption{Mechanism for \EDP{}}\label{mechanism}
+\begin{algorithm}[t]
+ \caption{Mechanism for \SEDP{}}\label{mechanism}
\begin{algorithmic}[1]
\State $\mathcal{N} \gets \mathcal{N}\setminus\{i\in\mathcal{N} : c_i > B\}$
\State $i^* \gets \argmax_{j\in\mathcal{N}}V(j)$
- \State $L^* \gets \argmax_{\lambda\in[0,1]^{n}} \{L(\lambda)
+ \State $OPT'_{-i^*} \gets \max_{\lambda\in[0,1]^{n}} \{L(\lambda)
\mid \lambda_{i^*}=0,\sum_{i \in \mathcal{N}\setminus\{i^*\}}c_i\lambda_i\leq B\}$
\Statex
- \If{$L^* < C \cdot V(i^*)$} \label{c}
+ \If{$OPT'_{-i^*} < C \cdot V(i^*)$} \label{c}
\State \textbf{return} $\{i^*\}$
\Else
\State $i \gets \argmax_{1\leq j\leq n}\frac{V(j)}{c_j}$
@@ -155,6 +135,19 @@ set can be found in~\cite{singer-mechanisms}.
\EndIf
\end{algorithmic}
\end{algorithm}
+
+\fussy
+In summary, the resulting mechanism for \SEDP{} is composed of
+\begin{itemize}
+\item the allocation function presented in Algorithm~\ref{mechanism}, and
+\item the payment function which pays each allocated agent $i$ her threshold
+payment as described in Myerson's Theorem. In the case where $\{i^*\}$ is the
+allocated set, her threshold payment is $B$ (she would have been dropped on
+line 1 of Algorithm~\ref{mechanism} had she reported a higher cost).
+A closed-form formula for threshold payments when $S_G$ is the allocated set can
+be found in~\cite{singer-mechanisms}.
+\end{itemize}
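The allocation rule of Algorithm~\ref{mechanism} can be sketched end to end in Python. Everything below is an assumption-laden illustration: the convex program computing $OPT'_{-i^*}$ is replaced by a coarse (exponential-time) grid search, and the greedy branch uses the simple marginal-value-per-cost loop, since the mechanism's exact stopping rule is outside this hunk.

```python
import itertools
import math
import numpy as np

def logdet_value(S, X):
    # V(S) = log det(I_d + sum_{i in S} x_i x_i^T)
    d = X.shape[1]
    M = np.eye(d) + sum(np.outer(X[i], X[i]) for i in S)
    return np.linalg.slogdet(M)[1]

def allocate(X, cost, B, C, grid_steps=5):
    n, d = X.shape
    N = [i for i in range(n) if cost[i] <= B]   # line 1: drop over-budget agents
    i_star = max(N, key=lambda i: logdet_value([i], X))
    # Stand-in for the convex program computing OPT'_{-i*}: brute-force grid
    # search over lambda with lambda_{i*} = 0 (a real implementation would run
    # Newton's method on the concave objective L).
    rest = [i for i in N if i != i_star]
    grid = np.linspace(0.0, 1.0, grid_steps)
    opt_relaxed = 0.0
    for lam in itertools.product(grid, repeat=len(rest)):
        if sum(l * cost[i] for l, i in zip(lam, rest)) <= B:
            M = np.eye(d) + sum(l * np.outer(X[i], X[i])
                                for l, i in zip(lam, rest))
            opt_relaxed = max(opt_relaxed, np.linalg.slogdet(M)[1])
    if opt_relaxed < C * logdet_value([i_star], X):
        return [i_star]
    # otherwise: greedy by marginal value per cost within the budget
    S, spent = [], 0.0
    candidates = sorted(N)
    while True:
        base = logdet_value(S, X)
        best, best_ratio = None, 0.0
        for j in candidates:
            if spent + cost[j] <= B:
                r = (logdet_value(S + [j], X) - base) / cost[j]
                if r > best_ratio:
                    best, best_ratio = j, r
        if best is None:
            return S
        S.append(best)
        spent += cost[best]
        candidates.remove(best)
```

On small instances the if-branch typically fires, since $C$ is a fairly large constant (roughly $12$, see \eqref{eq:c}).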
+
We can now state our main result:
\begin{theorem}\label{thm:main}
\sloppy
@@ -171,22 +164,12 @@ We can now state our main result:
\end{theorem}
\fussy
-
-%\stratis{Add lowerbound here too.}
-%Note that this implies we construct a poly-time mechanism with accuracy arbitrarily close to 19.68, by taking $\varepsilon = \ldots$\stratis{fix me}.
In addition, we prove the following simple lower bound.
\begin{theorem}\label{thm:lowerbound}
There is no $2$-approximate, truthful, budget feasible, individually rational mechanism for \SEDP{}.
\end{theorem}
-%\stratis{move the proof as appropriate}
-\begin{proof}
-Suppose, for contradiction, that such a mechanism exists. Consider two experiments with dimension $d=2$, such that $x_1 = e_1=[1 ,0]$, $x_2=e_2=[0,1]$ and $c_1=c_2=B/2+\epsilon$. Then, one of the two experiments, say, $x_1$, must be in the set selected by the mechanism, otherwise the ratio is unbounded, a contradiction. If $x_1$ lowers its value to $B/2-\epsilon$, by monotonicity it remains in the solution; by threshold payment, it is paid at least $B/2+\epsilon$. So $x_2$ is not included in the solution by budget feasibility and individual rationality: hence, the selected set attains a value $\log2$, while the optimal value is $2\log 2$.
-\end{proof}
-
\subsection{Proof of Theorem~\ref{thm:main}}\label{sec:proofofmainthm}
-%\stratis{individual rationality???}
-%The proof of the properties of the mechanism is broken down into lemmas.
We now present the proof of Theorem~\ref{thm:main}. Truthfulness and individual
rationality follow from monotonicity and threshold payments. Monotonicity and
@@ -228,85 +211,74 @@ following lemma which establishes that $OPT'$, the optimal value \eqref{relax}
\end{lemma}
The proof of Lemma~\ref{lemma:relaxation} is our main technical contribution, and can be found in Section \ref{sec:relaxation}.
-\paragraph{Finishing Proof of Theorem~\ref{thm:main} }
-Note that the lower bound $OPT' \geq OPT
-$ also holds trivially, as $L$ is a fractional relaxation of $V$ over $\mathcal{N}$.
-Using Lemma~\ref{lemma:relaxation} we can complete the proof of Theorem~\ref{thm:main} by showing that, %\begin{lemma}\label{lemma:approx}
- %C_{\textrm{max}} = \max\left(1+C,\frac{3e}{e-1}\left( 1 + \frac{8e}{C
- %(e-1) -10e +2}\right)\right)
- for any $\varepsilon > 0$, if $OPT_{-i}'$, the optimal value of $L$ when $i^*$ is excluded from $\mathcal{N}$, has been computed to a precision
- $\varepsilon$, then the set $S^*$ allocated by the mechanism is such that:
- \begin{align}
- OPT
- \leq \frac{10e\!-\!3 + \sqrt{64e^2\!-\!24e\!+\!9}}{2(e\!-\!1)} V(S^*)\!+\! \varepsilon \label{approxbound}
- \end{align}
-%\end{lemma}
- To see this, let $OPT_{-i^*}'$ be the true maximum value of $L$ subject to $\lambda_{i^*}=0$, $\sum_{i\in \mathcal{N}\setminus{i^*}}c_i\leq B$. Assume that on line 3 of algorithm~\ref{mechanism}, a quantity
- $\tilde{L}$ such that $\tilde{L}-\varepsilon\leq OPT_{-i^*}' \leq
- \tilde{L}+\varepsilon$ has been computed (Lemma~\ref{lemma:complexity}
- states that this is computed in time within our complexity guarantee).
- If the condition on line 3 of the algorithm holds, then:
- \begin{displaymath}
- V(i^*) \geq \frac{1}{C}OPT_{-i^*}'-\frac{\varepsilon}{C} \geq
- \frac{1}{C}OPT_{-i^*} -\frac{\varepsilon}{C}
- \end{displaymath}
- as $L$ is a fractional relaxation of $V$. Also,
- \begin{displaymath}
- OPT \leq OPT_{-i^*} + V(i^*)
- \end{displaymath}
- Hence:
- \begin{equation}\label{eq:bound1}
- OPT\leq (1+C)V(i^*) + \varepsilon
- \end{equation}
- Note that $OPT_{-i^*}'\leq OPT'$. If the condition does not hold,
- from Lemmas \ref{lemma:relaxation} and \ref{lemma:greedy-bound}:
- \begin{align*}
- V(i^*) & \stackrel{}\leq \frac{1}{C}OPT_{-i^*}'+ \frac{\varepsilon}{C}\leq \frac{1}{C}
- \big(2 OPT + 2 V(i^*)\big) + \frac{\varepsilon}{C}\\
- & \leq \frac{1}{C}\left(\frac{2e}{e-1}\big(3 V(S_G)
- + 2 V(i^*)\big)
- + 2 V(i^*)\right) + \frac{\varepsilon}{C}
- \end{align*}
- Thus, if $C$ is such that $C(e-1) -6e +2 > 0$,
- \begin{align*}
- V(i^*) \leq \frac{6e}{C(e-1)- 6e + 2} V(S_G)
- + \frac{(e-1)\varepsilon}{C(e-1)- 6e + 2}
- \end{align*}
- Finally, using again Lemma~\ref{lemma:greedy-bound}, we get:
- \begin{equation}\label{eq:bound2}
- OPT(V, \mathcal{N}, B) \leq \frac{3e}{e-1}\left( 1 + \frac{4e}{C
- (e-1) -6e +2}\right) V(S_G)
- +\frac{2e\varepsilon}{C(e-1)- 6e + 2}
- \end{equation}
- To minimize the coefficients of $V_{i^*}$ and $V(S_G)$ in \eqref{eq:bound1} and \eqref{eq:bound2} respectively,
- we wish to chose for $C=C^*$ such that:
- \begin{displaymath}
- C^* = \argmin_C
- \max\left(1+C,\frac{3e}{e-1}\left( 1 + \frac{4e}{C (e-1) -6e +2}
- \right)\right)
- \end{displaymath}
- This equation has two solutions. Only one of those is such that
- $C(e-1) -6e +2 \geq 0$.
- %which is needed in the above derivation.
-This solution is:
- \begin{align}
- C^* = \frac{8e-1 + \sqrt{64e^2-24e + 9}}{2(e-1)} \label{eq:c}
- \end{align}
- For this solution,
- % \begin{displaymath}
- $ \frac{2e\varepsilon}{C^*(e-1)- 6e + 2}\leq \varepsilon. $
- % \end{displaymath}
- Placing the expression of $C^*$ in \eqref{eq:bound1} and \eqref{eq:bound2}
- gives the approximation ratio in \eqref{approxbound}, and concludes the proof of Theorem~\ref{thm:main}.\hspace*{\stretch{1}}\qed
-%\end{proof}
+Using Lemma~\ref{lemma:relaxation} we can complete the proof of
+Theorem~\ref{thm:main} by showing that, for any $\varepsilon > 0$, if
+$OPT_{-i^*}'$, the optimal value of $L$ when $i^*$ is excluded from
+$\mathcal{N}$, has been computed to a precision $\varepsilon$, then the set
+$S^*$ allocated by the mechanism is such that:
+\begin{equation} \label{approxbound}
+OPT
+\leq \frac{10e\!-\!3 + \sqrt{64e^2\!-\!24e\!+\!9}}{2(e\!-\!1)} V(S^*)\!
++ \! \varepsilon
+\end{equation}
+To see this, let $OPT_{-i^*}'$ be the true maximum value of $L$ subject to
+$\lambda_{i^*}=0$, $\sum_{i\in \mathcal{N}\setminus{i^*}}c_i\leq B$. Assume
+that on line 3 of Algorithm~\ref{mechanism}, a quantity $\tilde{L}$ such that
+$\tilde{L}-\varepsilon\leq OPT_{-i^*}' \leq \tilde{L}+\varepsilon$ has been
+computed (Lemma~\ref{lemma:complexity} states that this is computed in time
+within our complexity guarantee). If the condition on line 3 of the algorithm
+holds, then:
+\begin{displaymath}
+ V(i^*) \geq \frac{1}{C}OPT_{-i^*}'-\frac{\varepsilon}{C} \geq
+ \frac{1}{C}OPT_{-i^*} -\frac{\varepsilon}{C}
+\end{displaymath}
+as $L$ is a fractional relaxation of $V$. Also, $OPT \leq OPT_{-i^*} + V(i^*)$,
+hence:
+\begin{equation}\label{eq:bound1}
+ OPT\leq (1+C)V(i^*) + \varepsilon
+\end{equation}
+Note that $OPT_{-i^*}'\leq OPT'$. If the condition does not hold, from Lemmas
+\ref{lemma:relaxation} and \ref{lemma:greedy-bound}:
+\begin{align*}
+ V(i^*) & \leq \frac{1}{C}OPT_{-i^*}' + \frac{\varepsilon}{C}
+ \leq \frac{1}{C} \big(2 OPT + 2 V(i^*)\big) + \frac{\varepsilon}{C}\\
+ & \leq \frac{1}{C}\left(\frac{2e}{e-1}\big(3 V(S_G)
+ + 2 V(i^*)\big) + 2 V(i^*)\right) + \frac{\varepsilon}{C}
+\end{align*}
+Thus, if $C$ is such that $C(e-1) -6e +2 > 0$,
+\begin{align*}
+ V(i^*) \leq \frac{6e}{C(e-1)- 6e + 2} V(S_G)
+ + \frac{(e-1)\varepsilon}{C(e-1)- 6e + 2}
+\end{align*}
+Finally, using again Lemma~\ref{lemma:greedy-bound}, we get:
+\begin{equation}\label{eq:bound2}
+ OPT(V, \mathcal{N}, B) \leq
+ \frac{3e}{e-1}\left( 1 + \frac{4e}{C (e-1) -6e +2}\right) V(S_G)
+ + \frac{2e\varepsilon}{C(e-1)- 6e + 2}
+\end{equation}
+To minimize the coefficients of $V(i^*)$ and $V(S_G)$ in \eqref{eq:bound1}
+and \eqref{eq:bound2} respectively, we wish to choose $C=C^*$ such that:
+\begin{displaymath}
+ C^* = \argmin_C
+ \max\left(1+C,\frac{3e}{e-1}\left( 1 + \frac{4e}{C (e-1) -6e +2}
+ \right)\right)
+\end{displaymath}
+Equating the two terms in the maximum yields an equation with two solutions,
+only one of which satisfies $C(e-1) - 6e + 2 \geq 0$. This solution is:
+\begin{equation}\label{eq:c}
+ C^* = \frac{8e-1 + \sqrt{64e^2-24e + 9}}{2(e-1)}
+\end{equation}
+For this solution, $\frac{2e\varepsilon}{C^*(e-1)- 6e + 2}\leq \varepsilon.$
+Placing the expression of $C^*$ in \eqref{eq:bound1} and \eqref{eq:bound2}
+gives the approximation ratio in \eqref{approxbound}, and concludes the proof
+of Theorem~\ref{thm:main}.\hspace*{\stretch{1}}\qed
\subsection{Proof of Lemma~\ref{lemma:relaxation}}\label{sec:relaxation}
-We need to prove that for our relaxation $L$ \eqref{eq:our-relaxation}, $OPT'$
-is close to $OPT$ as stated in Lemma~\ref{lemma:relaxation}. As discussed at
-the beginning of Section~\ref{sec:main}, following the same approach as
-\citeN{singer-influence}, we make use of the \emph{pipage
-rounding} framework of \citeN{pipage}.
+We need to prove that for our relaxation $L$ given by
+\eqref{eq:our-relaxation}, $OPT'$ is close to $OPT$ as stated in
+Lemma~\ref{lemma:relaxation}. Our analysis follows the \emph{pipage rounding}
+framework of \citeN{pipage}.
This framework uses the \emph{multi-linear} extension $F$ of the submodular
function $V$. Let $P_\mathcal{N}^\lambda(S)$ be the probability of choosing the set $S$ if we select each element $i$ in $\mathcal{N}$ independently with probability $\lambda_i$:
@@ -315,11 +287,11 @@ function $V$. Let $P_\mathcal{N}^\lambda(S)$ be the probability of choosing the
\prod_{i\in\mathcal{N}\setminus S}( 1 - \lambda_i).
\end{displaymath}
Then, the \emph{multi-linear} extension $F$ is defined by:
-\begin{equation}\label{eq:multilinear}
+\begin{displaymath}
F(\lambda)
\defeq \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\big[V(S)\big]
= \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) V(S)
-\end{equation}
+\end{displaymath}
For \EDP{} the multi-linear extension can be written:
\begin{equation}\label{eq:multi-linear-logdet}
@@ -334,17 +306,18 @@ expectation and the $\log\det$ in \eqref{eq:multi-linear-logdet}:
P_\mathcal{N}^\lambda}\bigg[I_d + \sum_{i\in S} x_i\T{x_i} \bigg]\right).
\end{displaymath}
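The multi-linear extension and the Jensen-type inequality above (concavity of $\log\det$ gives $F(\lambda)\leq L(\lambda)$) can be checked by explicit enumeration for small $n$. This is an illustrative sketch with assumed data; the $2^n$-term sum is only meant for tiny instances.

```python
import numpy as np
from itertools import combinations

def multilinear_F(lam, V, n):
    # F(lambda) = E_{S ~ P^lambda}[V(S)], computed by enumerating
    # all 2^n subsets with their product probabilities.
    total = 0.0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            p = 1.0
            for i in range(n):
                p *= lam[i] if i in S else 1.0 - lam[i]
            total += p * V(set(S))
    return total
```

At an indicator vector $\id_S$ the expectation collapses to $V(S)$, so $F$ is itself a fractional relaxation of $V$.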
-The pipage rounding framework then proceeds as follows:
+The proof proceeds as follows:
\begin{itemize}
-\item First, we prove (Lemma \ref{lemma:rounding}) that $F$ admits the
-following rounding property: let $\lambda$ be a feasible element of $[0,1]^n$,
-it is possible to trade one fractional component of $\lambda$ for another until
-one of them becomes integral, obtaining a new element $\tilde{\lambda}$ which
-is both feasible and for which $F(\lambda)\geq F(\lambda)$. Here, by
-feasibility of a point $\lambda$, we mean that it satisfies the budget
-constraint $\sum_{i=1}^n \lambda_i c_i \leq B$. This rounding property is
-referred to in the literature as \emph{cross-convexity} (see, \emph{e.g.},
-\cite{dughmi}), or $\varepsilon$-convexity by \citeN{pipage}.
+\item First, we prove that $F$ admits the following rounding property: let
+$\lambda$ be a feasible element of $[0,1]^n$; it is possible to trade one
+fractional component of $\lambda$ for another until one of them becomes
+integral, obtaining a new element $\tilde{\lambda}$ which is both feasible and
+for which $F(\tilde{\lambda})\geq F(\lambda)$. Here, by feasibility of a point
+$\lambda$, we mean that it satisfies the budget constraint $\sum_{i=1}^n
+\lambda_i c_i \leq B$. This rounding property is referred to in the literature
+as \emph{cross-convexity} (see, \emph{e.g.}, \cite{dughmi}), or
+$\varepsilon$-convexity by \citeN{pipage}. This is stated and proven in
+Lemma~\ref{lemma:rounding} and allows us to bound $F$ in terms of $OPT$.
\item Next, we prove the central result of bounding $L$ appropriately in terms
of the multi-linear relaxation $F$ (Lemma \ref{lemma:relaxation-ratio}).
\item Finally, we conclude the proof of Lemma~\ref{lemma:relaxation} by
@@ -352,8 +325,7 @@ combining Lemma~\ref{lemma:rounding} and Lemma~\ref{lemma:relaxation-ratio}.
\end{itemize}
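One step of the rounding in the first bullet can be sketched concretely. The code below is an illustration under assumptions (a generic submodular $V$, $F$ evaluated by brute-force enumeration): it moves mass between two fractional coordinates along the budget-preserving direction until one of them becomes integral, and keeps the better endpoint. Cross-convexity guarantees one endpoint does not decrease $F$.

```python
from itertools import combinations

def multilinear_F(lam, V, n):
    # brute-force multi-linear extension (2^n terms, for tiny n only)
    total = 0.0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            p = 1.0
            for k in range(n):
                p *= lam[k] if k in S else 1.0 - lam[k]
            total += p * V(set(S))
    return total

def pipage_step(lam, i, j, V, cost):
    # Trade mass between coordinates i and j along the budget-preserving
    # direction (+t on lam_i, -t * c_i / c_j on lam_j) until one coordinate
    # hits 0 or 1; return whichever endpoint has the larger F value.
    n = len(lam)
    alpha = cost[i] / cost[j]
    t_up = min(1.0 - lam[i], lam[j] / alpha)        # push lam_i up
    t_down = min(lam[i], (1.0 - lam[j]) / alpha)    # push lam_i down
    endpoints = []
    for t in (t_up, -t_down):
        new = list(lam)
        new[i] += t
        new[j] -= t * alpha
        endpoints.append(new)
    return max(endpoints, key=lambda l: multilinear_F(l, V, n))
```

Along this direction $F$ is a quadratic in $t$ with non-negative leading coefficient (the cross second derivative of a submodular multi-linear extension is non-positive), so its maximum over the segment is attained at an endpoint.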
\begin{comment}
-Formally, %this property states that
-if we define:
+Formally, if we define:
\begin{displaymath}
\tilde{F}_\lambda(\varepsilon) \defeq F\big(\lambda + \varepsilon(e_i
- e_j)\big)
@@ -371,10 +343,7 @@ or the $j$-th component of $\lambda$ becomes integral.
For any feasible $\lambda\in[0,1]^{n}$, there exists a feasible
$\bar{\lambda}\in[0,1]^{n}$ such that at most one of its components is
fractional %, that is, lies in $(0,1)$ and:
- and
- %\begin{displaymath}
- $ F_{\mathcal{N}}(\lambda)\leq F_{\mathcal{N}}(\bar{\lambda})$.
- %\end{displaymath}
+ and $F_{\mathcal{N}}(\lambda)\leq F_{\mathcal{N}}(\bar{\lambda})$.
\end{lemma}
\begin{proof}
We give a rounding procedure which, given a feasible $\lambda$ with at least
@@ -542,11 +511,7 @@ In particular,
\begin{displaymath}
\T{x_i}A(S)^{-1}x_i \leq \norm{x_i}_2^2 \leq 1
\end{displaymath}
- Moreover:
- \begin{displaymath}
- \forall x\leq 1,\; \log(1+x)\geq x
- \end{displaymath}
- Hence:
+ Moreover, $\log(1+x)\geq x$ for all $x\leq 1$. Hence:
\begin{displaymath}
\partial_i F(\lambda) \geq
\frac{1}{4}
@@ -596,7 +561,7 @@ Having bound the ratio between the partial derivatives, we now bound the ratio $
function $V$. Hence, the ratio is equal to 1 on the vertices.
\end{proof}
-\paragraph{Proof of Lemma~\ref{lemma:relaxation}}
+\begin{proof}[Proof of Lemma~\ref{lemma:relaxation}]
Let us consider a feasible point $\lambda^*\in[0,1]^{n}$ such that $L(\lambda^*)
= OPT'$. By applying Lemma~\ref{lemma:relaxation-ratio}
and Lemma~\ref{lemma:rounding} we get a feasible point $\bar{\lambda}$ with at most
@@ -618,9 +583,23 @@ Let us consider a feasible point $\lambda^*\in[0,1]^{n}$ such that $L(\lambda^*)
Note that since $\bar{\lambda}$ is feasible, $S$ is also feasible and
$V(S)\leq OPT$. Hence:
\begin{equation}\label{eq:e2}
- F(\bar{\lambda}) \leq 2 OPT
+ F(\bar{\lambda}) \leq OPT
+ \max_{i\in\mathcal{N}} V(i)
\end{equation}
Together, \eqref{eq:e1} and \eqref{eq:e2} imply the lemma.
+\end{proof}
+\subsection{Proof of Theorem \ref{thm:lowerbound}}
+
+\begin{proof}
+Suppose, for contradiction, that such a mechanism exists. Consider two
+experiments with dimension $d=2$, such that $x_1 = e_1=[1 ,0]$, $x_2=e_2=[0,1]$
+and $c_1=c_2=B/2+\epsilon$. Then, one of the two experiments, say, $x_1$, must
+be in the set selected by the mechanism, otherwise the ratio is unbounded,
+a contradiction. If $x_1$ lowers its cost to $B/2-\epsilon$, by monotonicity
+it remains in the solution; by threshold payment, it is paid at least
+$B/2+\epsilon$. So $x_2$ is not included in the solution by budget feasibility
+and individual rationality: hence, the selected set attains a value $\log2$,
+while the optimal value is $2\log 2$.
+\end{proof}
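The two values in this proof can be checked directly: with $x_1=e_1$ and $x_2=e_2$, the full set attains $\log\det(\mathrm{diag}(2,2)) = 2\log 2$ while a single experiment attains $\log 2$, a gap of exactly $2$. A minimal numeric check:

```python
import numpy as np

I2 = np.eye(2)
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# value of the full set {x_1, x_2} and of the singleton {x_1}
V_both = np.linalg.slogdet(I2 + np.outer(e1, e1) + np.outer(e2, e2))[1]
V_single = np.linalg.slogdet(I2 + np.outer(e1, e1))[1]
```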