author    Thibaut Horel <thibaut.horel@gmail.com>  2012-11-03 16:48:32 +0100
committer Thibaut Horel <thibaut.horel@gmail.com>  2012-11-03 16:48:32 +0100
commit    d9c6b8d8e3a3c5ae02e80a9f8db45e845a1a5235 (patch)
tree      4e9c6bc3f6f38c8f5afd353007a2be1fb0789831
parent    f64ff3ca389ec33c240bf6d872033a4bc6e4d9c6 (diff)
More fixes on section 3
-rw-r--r--  main.tex  105
1 file changed, 69 insertions(+), 36 deletions(-)
diff --git a/main.tex b/main.tex
index 178ad1f..56fba56 100644
--- a/main.tex
+++ b/main.tex
@@ -73,15 +73,23 @@ and to $S_G$ otherwise yields a 8.34 approximation mechanism. For specific
problems, Chen et al. \cite{chen} for knapsack on one hand, Singer
\cite{singer-influence} for maximum coverage on the other hand, instead compare
$V(i^*)$ to $OPT(R_{\mathcal{N}\setminus\{i\}}, B)$, where $R_\mathcal{N}$
-denotes a fractional relaxation of the function $V$ over the set $\mathcal{N}$.
-
-We say that $R_\mathcal{N}$ is a fractional relaxation of $V$ over the set
-$\mathcal{N}$, if (a) $R_\mathcal{N}$ is a function defined on the hypercube
-$[0, 1]^{|\mathcal{N}|}$ and (b) for all $S\subseteq\mathcal{N}$, if
+denotes a fractional relaxation of the function $V$ over the set
+$\mathcal{N}$. We say that $R_\mathcal{N}$ is a fractional relaxation of $V$
+over the set $\mathcal{N}$, if (a) $R_\mathcal{N}$ is a function defined on the
+hypercube $[0, 1]^{|\mathcal{N}|}$ and (b) for all $S\subseteq\mathcal{N}$, if
$\mathbf{1}_S$ denotes the indicator vector of $S$ we have
-$R_\mathcal{N}(\mathbf{1}_S) = V(S)$. A relaxation which is commonly used, due
-to its well-behaved properties in the context of maximizing submodular
-functions, is the \emph{multi-linear} extension:
+$R_\mathcal{N}(\mathbf{1}_S) = V(S)$. The optimization program
+\eqref{eq:non-strategic} extends naturally to relaxations:
+\begin{displaymath}
+ OPT(R_\mathcal{N}, B) = \max_{\lambda\in[0, 1]^{|\mathcal{N}|}}
+ \left\{R_\mathcal{N}(\lambda) \mid \sum_{i=1}^{|\mathcal{N}|} \lambda_i c_i
+ \leq B\right\}
+\end{displaymath}
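The relaxed program above is typically evaluated via the multi-linear extension \eqref{eq:multilinear}, which can be estimated by Monte Carlo sampling. A minimal Python sketch; the coverage-style value function `V` is purely illustrative and not from the paper:

```python
import random

def multilinear_extension(V, lam, n_samples=20000, seed=0):
    """Monte Carlo estimate of F(lam) = E_{S ~ P^lam}[V(S)], where each
    element i is included in S independently with probability lam[i]."""
    rng = random.Random(seed)
    n = len(lam)
    total = 0.0
    for _ in range(n_samples):
        S = [i for i in range(n) if rng.random() < lam[i]]
        total += V(S)
    return total / n_samples

# Illustrative coverage-style submodular value function.
sets = [{0, 1}, {1, 2}, {2, 3}]
V = lambda S: len(set().union(*(sets[i] for i in S)))

# Property (b): at an integral point, F(1_S) = V(S).
print(multilinear_extension(V, [1.0, 0.0, 1.0]))  # 4.0, equals V({0, 2})
```

At fractional points the estimate converges at the usual Monte Carlo rate; exact evaluation is exponential in $|\mathcal{N}|$ in general.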
+
+
+A relaxation which is commonly used, due to its well-behaved properties in the
+context of maximizing submodular functions, is the \emph{multi-linear}
+extension:
\begin{equation}\label{eq:multilinear}
F_\mathcal{N}^V(\lambda)
= \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\left[V(S)\right]
@@ -165,21 +173,28 @@ We can now state the main result of this section:
\subsection{Proof of theorem~\ref{thm:main}}
-\stratis{Explain what are the main steps in the proof}
+The proof of the properties of the mechanism is broken down into lemmas.
+Monotonicity and budget feasibility follow from the analysis of Chen et al.
+\cite{chen}. The approximation ratio is proved in lemma~\ref{lemma:approx},
+using a bound on our concave relaxation $L_\mathcal{N}$
+(lemma~\ref{lemma:relaxation}) which is our main technical contribution. The
+proof of this lemma is deferred to a dedicated section
+(Section~\ref{sec:relaxation}).
+
\begin{lemma}\label{lemma:monotone}
The mechanism is monotone.
\end{lemma}
\begin{proof}
- Suppose, for contradiction, that there exists a user $i$ that has been
- selected by the mechanism and that would not be selected had he reported
- a cost $c_i'\leq c_i$ (all the other costs staying the same).
+ Suppose, for contradiction, that there exists an agent $i$ that has been
+ selected by the mechanism and that would not have been selected had she
+ reported a cost $c_i'\leq c_i$ (all the other costs staying the same).
If $i\neq i^*$ and $i$ has been selected, then we are in the case where
- $L_\mathcal{N}(x^*) \geq C V(i^*)$ and $i$ was included in the result set by the greedy
- part of the mechanism. By reporting a cost $c_i'\leq c_i$, using the
- submodularity of $V$, we see that $i$ will satisfy the greedy selection
- rule:
+ $L_\mathcal{N}(x^*) \geq C V(i^*)$ and $i$ was included in the result set
+ by the greedy part of the mechanism. By reporting a cost $c_i'\leq c_i$,
+ using the submodularity of $V$, we see that $i$ will satisfy the greedy
+ selection rule:
\begin{displaymath}
i = \argmax_{j\in\mathcal{N}\setminus S} \frac{V(S\cup\{j\})
- V(S)}{c_j}
@@ -194,10 +209,11 @@ The mechanism is monotone.
\end{align*}
Hence $i$ will still be included in the result set.
- If $i = i^*$, $i$ is included iff $L_\mathcal{N}(x^*) \leq C V(i^*)$. Reporting $c_i'$
- instead of $c_i$ does not change the value $V(i^*)$ nor $L_\mathcal{N}(x^*)$ (which is
- computed over $\mathcal{N}\setminus\{i^*\}$). Thus $i$ is still included by
- reporting a different cost.
+ If $i = i^*$, $i$ is included iff $L_\mathcal{N}(x^*) \leq C V(i^*)$.
+ Reporting $c_i'$ instead of $c_i$ does not change the value $V(i^*)$ nor
+ $L_\mathcal{N}(x^*)$ (which is computed over
+ $\mathcal{N}\setminus\{i^*\}$). Thus $i$ is still included by reporting
+ a different cost.
\end{proof}
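The greedy selection rule used in this proof can be sketched in Python. This is a simplification with a plain budget check as the stopping rule (the actual mechanism's threshold condition from \cite{chen} is more involved); the coverage function is illustrative:

```python
def greedy(V, costs, B):
    """Cost-scaled greedy: repeatedly add the agent maximizing
    (V(S + j) - V(S)) / c_j, i.e. the selection rule in the proof.
    Simplified stopping rule: stop when no affordable agent with
    positive marginal value remains."""
    N = set(range(len(costs)))
    S, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for j in sorted(N - S):  # sorted: deterministic tie-breaking
            gain = V(S | {j}) - V(S)
            if spent + costs[j] <= B and gain / costs[j] > best_ratio:
                best, best_ratio = j, gain / costs[j]
        if best is None:
            return S
        S.add(best)
        spent += costs[best]

# Illustrative coverage-style value function.
sets = [{0, 1}, {1, 2}, {2, 3}]
V = lambda S: len(set().union(*(sets[i] for i in S)))
print(greedy(V, [1.0, 1.0, 1.0], 2.0))  # {0, 2}: the two non-overlapping sets
```

Lowering a selected agent's reported cost only increases her marginal-value-per-cost ratio, which is the monotonicity argument above.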
\begin{lemma}\label{lemma:budget-feasibility}
@@ -208,14 +224,13 @@ The mechanism is budget feasible.
Let us denote by $S_G$ the set selected by the greedy heuristic in the
mechanism of Algorithm~\ref{mechanism}. Let $i\in S_G$, let us also denote by
$S_i$ the solution set that was selected by the greedy heuristic before $i$ was
-added. Recall from \cite{chen} that the following holds for any submodular
-function: if the point $i$ was selected by the greedy heuristic, then:
+added. We use the following result from Chen et al. \cite{chen}, which bounds
+the reported cost of an agent selected by the greedy heuristic, and holds for
+any submodular function $V$:
\begin{equation}\label{eq:budget}
c_i \leq \frac{V(S_i\cup\{i\}) - V(S_i)}{V(S_G)} B
\end{equation}
-\stratis{Use ``recall'' only for something the reader is likely to know. }
-
Assume now that our mechanism selects point $i^*$. In this case, his payment
is $B$ and the mechanism is budget-feasible.
@@ -227,7 +242,7 @@ shows that the threshold payment of user $i$ is bounded by:
Hence, the total payment is bounded by:
\begin{displaymath}
- \sum_{i\in S_M} \frac{V(S_i\cup\{i\}) - V(S_i)}{V(S_G)} B \leq B
+ \sum_{i\in S_M} \frac{V(S_i\cup\{i\}) - V(S_i)}{V(S_G)} B \leq B\qed
\end{displaymath}
\end{proof}
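The final telescoping sum can be checked numerically. A Python sketch, where the coverage function and the greedy order are illustrative; each term is the per-agent payment cap from \eqref{eq:budget}:

```python
def check_payment_bound(V, order, B):
    """Sum the per-agent payment caps (V(S_i + i) - V(S_i)) / V(S_G) * B
    over the greedy order. The marginal gains telescope to
    V(S_G) - V(empty), so the total is at most B."""
    S_G = set(order)
    S = set()
    total = 0.0
    for i in order:
        total += (V(S | {i}) - V(S)) / V(S_G) * B
        S.add(i)
    return total

# Illustrative coverage-style value function and greedy order.
sets = [{0, 1}, {1, 2}, {2, 3}]
V = lambda S: len(set().union(*(sets[i] for i in S)))
print(check_payment_bound(V, [0, 2, 1], B=10.0))  # 10.0 (= B exactly, V(empty) = 0)
```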
@@ -299,19 +314,37 @@ gives the result of the theorem.
\subsection{An approximation ratio for $L_\mathcal{N}$}\label{sec:relaxation}
-Our main contribution is lemma~\ref{lemma:relaxation} which gives an
-approximation ratio for $L_\mathcal{N}$ and is key in the proof of
-theorem~\ref{thm:main}.
+Since $L_\mathcal{N}$ is a relaxation of the value function $V$, we have:
+\begin{displaymath}
+ OPT(V,\mathcal{N},B) \leq OPT(L_\mathcal{N}, B)
+\end{displaymath}
+However, for the purpose of proving theorem~\ref{thm:main}, we need to bound
+$L_\mathcal{N}$ from above (up to a multiplicative factor) by $V$. We use the
+\emph{pipage rounding} method from Ageev and Sviridenko \cite{pipage}, where
+$L_\mathcal{N}$ is first bounded from above by $F_\mathcal{N}$, which is itself
+subsequently bounded by $V$. While the latter part is general and can be applied
+to the multi-linear extension of any submodular function, the former part is
+specific to our choice for the function $L_\mathcal{N}$ and is our main
+technical contribution.
-It has been observed already (see for example \cite{dughmi}) that the
-multi-linear extension presents the cross-convexity property: it is convex along
-any direction $e_i-e_j$ where $e_i$ and $e_j$ are two elements of the standard
-basis. This property allows to trade between two fractional components of
-a point without diminishing the value of the relaxation. The following lemma
-follows is inspired by a similar lemma from \cite{dughmi} but also ensures that
-the points remain feasible during the trade.
+First we prove a variant of the $\varepsilon$-convexity of the multi-linear
+extension \eqref{eq:multilinear} introduced by Ageev and Sviridenko
+\cite{pipage}, which allows us to trade one fractional component for another
+until one of them becomes integral, without losing any value. This property is
+also
+referred to in the literature as \emph{cross-convexity} (see \emph{e.g.}
+\cite{dughmi}). Formally, this property states that if we define:
+\begin{displaymath}
+ \tilde{F}_\lambda(\varepsilon) \defeq F_\mathcal{N}\big(\lambda + \varepsilon(e_i
+ - e_j)\big)
+\end{displaymath}
+where $e_i$ and $e_j$ are two vectors of the standard basis of
+$\reals^{|\mathcal{N}|}$, then $\tilde{F}_\lambda$ is convex. Hence its maximum
+over the interval:
+\begin{displaymath}
+ I_\lambda = \Big[\max(-\lambda_i,\lambda_j-1), \min(1-\lambda_i, \lambda_j)\Big]
+\end{displaymath}
+is attained at one of the endpoints of $I_\lambda$, at each of which either
+the $i$-th or the $j$-th component of $\lambda$ becomes integral.
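A single rounding step based on this property can be sketched in Python. The sketch assumes equal costs so that feasibility is preserved automatically (the rounding lemma below additionally handles the budget constraint), and estimates the multi-linear value by sampling; the coverage function is illustrative:

```python
import random

def multilinear(V, lam, n_samples=20000, seed=0):
    """Sampling estimate of the multi-linear extension F(lam)."""
    rng = random.Random(seed)
    n = len(lam)
    return sum(V([k for k in range(n) if rng.random() < lam[k]])
               for _ in range(n_samples)) / n_samples

def pipage_step(V, lam, i, j):
    """Move lam along e_i - e_j to an endpoint of
    I_lam = [max(-lam_i, lam_j - 1), min(1 - lam_i, lam_j)].
    Convexity of F along e_i - e_j implies the maximum over I_lam is
    attained at an endpoint, where lam_i or lam_j becomes integral."""
    lo = max(-lam[i], lam[j] - 1)
    hi = min(1 - lam[i], lam[j])
    candidates = []
    for eps in (lo, hi):
        new = list(lam)
        new[i] += eps
        new[j] -= eps
        candidates.append(new)
    return max(candidates, key=lambda v: multilinear(V, v))

# Illustrative coverage function; rounding [0.5, 0, 0.5] along e_0 - e_2
# makes both traded components integral in a single step.
sets = [{0, 1}, {1, 2}, {2, 3}]
V = lambda S: len(set().union(*(sets[i] for i in S)))
rounded = pipage_step(V, [0.5, 0.0, 0.5], 0, 2)
```

Repeating such steps over pairs of fractional coordinates yields a point with at most one fractional component, which is the shape of the rounding lemma below.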
-\stratis{explain how the proof below relates to the $\epsilon$-convexity of sviridenko}
\begin{lemma}[Rounding]\label{lemma:rounding}
For any feasible $\lambda\in[0,1]^{|\mathcal{N}|}$, there exists a feasible
$\bar{\lambda}\in[0,1]^{|\mathcal{N}|}$ such that at most one of its components is