author     Thibaut Horel <thibaut.horel@gmail.com>   2013-02-10 16:40:30 -0800
committer  Thibaut Horel <thibaut.horel@gmail.com>   2013-02-10 16:40:30 -0800
commit     2435d66feaaccea1b1cdf289b9e8305692d14dd0 (patch)
tree       f713b24e0c127bd53decbf90167604ad73152bfa /main.tex
parent     8b6da319e9ffa6ed92b0b34a0ef1771f16b18092 (diff)
download   recommendation-2435d66feaaccea1b1cdf289b9e8305692d14dd0.tar.gz
Fix citations in beginning of section 4
Diffstat (limited to 'main.tex')
-rw-r--r--  main.tex  |  22
1 file changed, 10 insertions(+), 12 deletions(-)
diff --git a/main.tex b/main.tex
index 1f92091..8c57916 100644
--- a/main.tex
+++ b/main.tex
@@ -45,7 +45,7 @@ approximation ratio
\paragraph{Strategic case}
We could design the allocation function of our mechanism by following the
full-information approach: allocating to agent $i^*$ when $V(i^*)\geq V(S_G)$ and to
-$S_G$ otherwise. However, Singer~\cite{singer-influence} notes that this
+$S_G$ otherwise. However, \citeN{singer-influence} notes that this
allocation function is not monotone, and thus breaks incentive compatibility
by Myerson's theorem (Theorem~\ref{thm:myerson}).
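For reference, the monotonicity condition invoked here can be sketched as
follows. In this procurement setting an allocation rule, written $\mathcal{A}$
below (the notation is illustrative and not taken from the excerpt above), is
monotone when a winning agent keeps winning after lowering her reported cost:
\begin{displaymath}
i \in \mathcal{A}(c_i, c_{-i}) \;\text{ and }\; c_i' \leq c_i
\quad\Longrightarrow\quad
i \in \mathcal{A}(c_i', c_{-i}).
\end{displaymath}
By Myerson's theorem, a monotone rule combined with threshold payments yields
a truthful mechanism, which is why the non-monotone rule above is rejected.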
@@ -55,14 +55,13 @@ full-information case after removing $i^*$ from the set $\mathcal{N}$:
OPT_{-i^*} \defeq \max_{S\subseteq\mathcal{N}\setminus\{i^*\}} \Big\{V(S) \;\Big| \;
\sum_{i\in S}c_i\leq B\Big\}
\end{equation}
-Singer~\cite{singer-mechanisms} and Chen \emph{et al.}~\cite{chen} prove that
-allocating to $i^*$ when $V(i^*)\geq C.OPT_{-i^*}$ (for some constant $C$) and
-to $S_G$ otherwise yields a 8.34-approximation mechanism. However,
-$OPT_{-i^*}$, the solution of the full-information case, cannot be computed in
-poly-time when the underlying problem is NP-hard (unless P=NP), as is the case
-for \EDP{}.
+\citeN{singer-mechanisms} and \citeN{chen} prove that allocating to $i^*$ when
+$V(i^*)\geq C\cdot OPT_{-i^*}$ (for some constant $C$) and to $S_G$ otherwise
+yields an 8.34-approximation mechanism. However, $OPT_{-i^*}$, the optimum of
+the full-information problem, cannot be computed in polynomial time when the
+underlying problem is NP-hard (unless P=NP), as is the case for \EDP{}.
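Written out, the allocation rule discussed in these lines is the following
threshold rule (the set-valued notation $S$ is illustrative; $S_G$ is the set
defined earlier in the section and not shown in this diff):
\begin{displaymath}
S \defeq
\begin{cases}
\{i^*\} & \text{if } V(i^*) \geq C\cdot OPT_{-i^*},\\
S_G & \text{otherwise.}
\end{cases}
\end{displaymath}
Since $OPT_{-i^*}$ cannot be computed in polynomial time here, the quantity
$L^*$ introduced next plays the role of a computable proxy for it.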
-Instead, Singer~\cite{singer-mechanisms} and Chen \emph{et al.}~\cite{chen}
+Instead, \citeN{singer-mechanisms} and \citeN{chen}
suggest replacing $OPT_{-i^*}$ with a quantity $L^*$ satisfying the following
properties:
\begin{itemize}
@@ -94,11 +93,10 @@ Section~\ref{sec:relaxation}) yields a constant-approximation mechanism for
In \cite{singer-influence}, the optimization program \eqref{relax}
cannot be solved efficiently when $R$ is the multilinear extension of $V$.
Consequently, Singer introduces a second relaxation which he relates to the
-multilinear extension through the \emph{pipage rounding} framework of Ageev and
-Sviridenko~\cite{pipage}.
+multilinear extension through the \emph{pipage rounding} framework of \citeN{pipage}.
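As a sketch of the technique named here (with illustrative notation: $e_i$ and
$e_j$ are standard basis vectors and $F$ a generic relaxed objective; none of
this is taken verbatim from the paper), pipage rounding perturbs a fractional
point $x$ along a direction that raises one coordinate and lowers another, and
uses convexity of the objective along that segment to move to an endpoint
without losing value:
\begin{displaymath}
x(\varepsilon) \defeq x + \varepsilon\,(e_i - e_j),
\qquad
\varepsilon \mapsto F\big(x(\varepsilon)\big) \text{ convex}
\;\Longrightarrow\;
\max_{\varepsilon} F\big(x(\varepsilon)\big) \text{ is attained at an endpoint.}
\end{displaymath}
Each such step typically fixes at least one more coordinate to an integral
value, and iterating produces an integral solution whose value can be compared
to that of the fractional relaxation.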
\paragraph{Our approach}
-Following Chen \emph{et al.}~\cite{chen} and Singer~\cite{singer-influence},
+Following \citeN{chen} and \citeN{singer-influence},
we in turn introduce a relaxation $L$ specifically tailored to the value
function of \EDP:
\begin{displaymath}
@@ -114,7 +112,7 @@ $O(\log\log\varepsilon^{-1})$ iterations. Being the solution to a maximization
problem, $L^*$ satisfies the required monotonicity property. The main challenge
will be to prove that $OPT'_{-i^*}$, for the relaxation $R=L$, is close to
$OPT$. As in~\cite{singer-influence}, this will be done using the
-\emph{pipage rounding} framework of~\cite{pipage} and forms the technical bulk
+\emph{pipage rounding} framework of~\citeN{pipage} and forms the technical bulk
of the paper.
The resulting mechanism for \EDP{} is composed of