Diffstat (limited to 'approximation.tex')
-rwxr-xr-x  approximation.tex | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/approximation.tex b/approximation.tex
index bab2c93..87d67d9 100755
--- a/approximation.tex
+++ b/approximation.tex
@@ -34,7 +34,7 @@ In the first part of this section, we address this issue by designing a convex r
A classical way of relaxing combinatorial optimization problems is
\emph{relaxing by expectation}, using the so-called \emph{multi-linear}
extension of the objective function $V$ (see, \emph{e.g.}, \cite{calinescu2007maximizing,vondrak2008optimal,dughmi2011convex}).
-This is because this extension can yield approximation guarantees for a wide class of combinatorial problems through \emph{pipage rounding}, a technique proposed by \citeN{pipage}. In general, such relaxations preserve monotonicity which, as discussed, is required in mechanism design.
+This is because this extension can yield approximation guarantees for a wide class of combinatorial problems through \emph{pipage rounding}, a technique by \citeN{pipage}. In general, such relaxations preserve monotonicity, which, as discussed, is required in mechanism design.
Formally, let $P_\mathcal{N}^\lambda$ be a probability distribution over $\mathcal{N}$ parametrized by $\lambda\in [0,1]^n$, where a set $S\subseteq \mathcal{N}$ sampled from $P_\mathcal{N}^\lambda$ is constructed as follows: each $i\in \mathcal{N}$ is selected to be in $S$ independently with probability $\lambda_i$, \emph{i.e.},
%\begin{displaymath}
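The sampling scheme $P_\mathcal{N}^\lambda$ above, and the multilinear extension $F(\lambda)=\mathbb{E}_{S\sim P_\mathcal{N}^\lambda}[V(S)]$ it induces, can be estimated by Monte Carlo. A minimal pure-Python sketch; the modular objective `V` below is a hypothetical stand-in for illustration only (the paper's $V$ is not shown in this hunk), chosen because for modular $V$ the extension has the closed form $F(\lambda)=\sum_i \lambda_i w_i$:

```python
import random

random.seed(0)

def sample_set(lam):
    """Draw S ~ P^lambda: include each i independently with probability lam[i]."""
    return {i for i, p in enumerate(lam) if random.random() < p}

def multilinear_estimate(V, lam, samples=20000):
    """Monte Carlo estimate of F(lam) = E[V(S)] under P^lambda."""
    return sum(V(sample_set(lam)) for _ in range(samples)) / samples

# Hypothetical modular objective V(S) = sum of fixed weights (illustration only).
w = [1.0, 2.0, 3.0]
V = lambda S: sum(w[i] for i in S)

lam = [0.5, 0.5, 0.5]
# For this modular V, F(lam) = sum_i lam_i * w_i = 3.0 exactly.
est = multilinear_estimate(V, lam)
```

With 20{,}000 samples the estimate concentrates tightly around the exact value; for submodular $V$ the same estimator applies, only the closed form is lost.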
@@ -82,16 +82,16 @@ Armed with this result, we subsequently use pipage rounding to show that any $\
$\bar{\lambda}\in \dom_c$ such that (a) $F(\lambda)\leq F(\bar{\lambda})$, and (b) at most one of the
coordinates of $\bar{\lambda}$ is fractional. %, that is, lies in $(0,1)$ and:
\end{lemma}
-The proof of this lemma is in \cite{arxiv}, and follows the main steps of the pipage rounding method of \citeN{pipage}. % this rounding property is referred to in the literature as \emph{cross-convexity} (see, \emph{e.g.}, \cite{dughmi}), or $\varepsilon$-convexity by \citeN{pipage}.
+The proof, also in \cite{arxiv}, follows the main steps of the pipage rounding method of \citeN{pipage}. % this rounding property is referred to in the literature as \emph{cross-convexity} (see, \emph{e.g.}, \cite{dughmi}), or $\varepsilon$-convexity by \citeN{pipage}.
Together, Lemma~\ref{lemma:relaxation-ratio} and Lemma~\ref{lemma:rounding} imply that $OPT$, the optimal value of \EDP, can be approximated by solving the following convex optimization problem:
\begin{align}\tag{$P_c$}\label{eq:primal}
\text{Maximize:} \quad L(\lambda)\quad \text{subject to:} \quad\lambda \in \dom_c
\end{align}
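To make \eqref{eq:primal} concrete: the objective is later identified as a $\log\det$ function, so as a toy sketch we assume $L(\lambda)=\log\det\big(I+\sum_i \lambda_i x_i x_i^\top\big)$ for some feature vectors $x_i$ (an experimental-design-style objective; the paper's actual $L$ and the cost constraints defining $\dom_c$ are not shown in this hunk, so the feasible set below is just the unit box). Since $\partial L/\partial\lambda_i = x_i^\top A^{-1} x_i \geq 0$, without cost constraints the maximizer saturates at $\lambda=\mathbf{1}$, which the sketch recovers by projected gradient ascent:

```python
import math

def a_matrix(lam, X):
    """A(lam) = I + sum_i lam_i * x_i x_i^T for 2-d feature vectors x_i."""
    A = [[1.0, 0.0], [0.0, 1.0]]
    for li, x in zip(lam, X):
        for r in range(2):
            for c in range(2):
                A[r][c] += li * x[r] * x[c]
    return A

def inv2(A):
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [[ A[1][1]/det, -A[0][1]/det],
            [-A[1][0]/det,  A[0][0]/det]]

def logdet2(A):
    return math.log(A[0][0]*A[1][1] - A[0][1]*A[1][0])

def projected_ascent(X, steps=500, lr=0.05):
    """Maximize log det A(lam) over the box [0,1]^n by projected gradient ascent.
    Gradient: d/d lam_i log det A(lam) = x_i^T A(lam)^{-1} x_i  (always >= 0)."""
    lam = [0.5] * len(X)
    for _ in range(steps):
        Ainv = inv2(a_matrix(lam, X))
        grad = [sum(x[r] * Ainv[r][c] * x[c] for r in range(2) for c in range(2))
                for x in X]
        lam = [min(1.0, max(0.0, li + lr * gi)) for li, gi in zip(lam, grad)]
    return lam

X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
lam_hat = projected_ascent(X)  # saturates at lam = (1, 1, 1) for this instance
```

This is only a sanity check of the relaxed problem's shape; the paper solves \eqref{eq:primal} with the barrier method discussed in the next subsection, not with projected gradient.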
-In particular, for $L_c^*\defeq \max_{\lambda\in \dom_c} L(\lambda)$, the following holds:
+In particular, for $L_c^*\defeq \max_{\lambda\in \dom_c} L(\lambda)$, the following holds \cite{arxiv}:
\begin{proposition}\label{prop:relaxation}
$OPT\leq L^*_c \leq 2 OPT + 2\max_{i\in\mathcal{N}}V(i)$.
\end{proposition}
-The proof of this proposition can be found in \cite{arxiv}. As we discuss in the next section, $L^*_c$ can be computed by a poly-time algorithm at arbitrary accuracy. However, the outcome of this computation may not necessarily be monotone; we address this by converting this poly-time estimator of $L^*_c$ to one that is ``almost'' monotone.%The optimization program \eqref{eq:non-strategic} extends naturally to such
+As we discuss in the next section, $L^*_c$ can be computed to arbitrary accuracy by a poly-time algorithm. However, the outcome of this computation may not necessarily be monotone; we address this by converting this poly-time estimator of $L^*_c$ to one that is ``almost'' monotone.%The optimization program \eqref{eq:non-strategic} extends naturally to such
%a relaxation. We define:
%\begin{equation}\tag{$P_c$}\label{eq:primal}
% L^*_c \defeq \max_{\lambda\in[0, 1]^{n}}
@@ -108,7 +108,7 @@ The proof of this proposition can be found in \cite{arxiv}. As we discuss in the
%proposition which is also proved in the Appendix.
\subsection{Polynomial-Time, Almost-Monotone Approximation}\label{sec:monotonicity}
- The $\log\det$ objective function of \eqref{eq:primal} is strictly concave and \emph{self-concordant} \cite{boyd2004convex}. The maximization of a concave, self-concordant function subject to a set of linear constraints can be performed through the \emph{barrier method} (see, \emph{e.g.}, \cite{boyd2004convex} Section 11.5.5 for general self-concordant optimization as well as \cite{vandenberghe1998determinant} for a detailed treatment of the $\log\det$ objective). The performance of the barrier method is summarized in our case by the following lemma:
+The $\log\det$ objective function of \eqref{eq:primal} is strictly concave and \emph{self-concordant}~\cite{boyd2004convex}. The maximization of a concave, self-concordant function subject to a set of linear constraints can be performed through the \emph{barrier method} (see, \emph{e.g.},~\cite{boyd2004convex} Section 11.5.5 for general self-concordant optimization as well as \cite{vandenberghe1998determinant} for a detailed treatment of the $\log\det$ objective). The performance of the barrier method is summarized in our case by the following lemma:
\begin{lemma}[\citeN{boyd2004convex}]\label{lemma:barrier}
For any $\varepsilon>0$, the barrier method computes an
approximation $\hat{L}^*_c$ that is $\varepsilon$-accurate, \emph{i.e.}, it satisfies $|\hat L^*_c- L^*_c|\leq \varepsilon$, in time $O\left(poly(n,d,\log\log\varepsilon^{-1})\right)$. The same guarantees apply when maximizing $L$ subject to an arbitrary set of $O(n)$ linear constraints.
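The barrier method behind Lemma~\ref{lemma:barrier} can be illustrated on a one-dimensional toy problem. The sketch below is a simplified rendition in the spirit of \cite{boyd2004convex}, Ch.~11, not the paper's algorithm: it maximizes a hypothetical concave $f(x)=-(x-2)^2$ over $[0,1]$ (two linear constraints, so $m=2$) via a damped Newton inner loop on the barrier-augmented objective $\phi_t(x)=t\,f(x)+\log x+\log(1-x)$, increasing $t$ geometrically until the standard suboptimality bound $m/t$ drops below $\varepsilon$:

```python
def barrier_maximize_1d(f, df, d2f, eps=1e-4, t=1.0, mu=10.0):
    """Barrier method for max f(x) s.t. 0 < x < 1 (m = 2 linear constraints).
    Outer loop: increase t until the gap bound m/t < eps.
    Inner loop: damped Newton on phi_t(x) = t*f(x) + log x + log(1-x)."""
    m, x = 2, 0.5
    while m / t >= eps:
        for _ in range(50):                        # Newton steps per centering stage
            g = t*df(x) + 1/x - 1/(1 - x)          # phi_t'(x)
            h = t*d2f(x) - 1/x**2 - 1/(1 - x)**2   # phi_t''(x) < 0 (strictly concave)
            dx, s = -g/h, 1.0
            while not (0 < x + s*dx < 1):          # damp the step to stay feasible
                s *= 0.5
            x += s*dx
            if abs(g) < 1e-10:
                break
        t *= mu
    return x

# Toy concave objective f(x) = -(x-2)^2, maximized over [0,1] at x* = 1, f* = -1.
x_hat = barrier_maximize_1d(lambda x: -(x - 2)**2,
                            lambda x: -2*(x - 2),
                            lambda x: -2.0)
```

The $O(\mathrm{poly}(n,d,\log\log\varepsilon^{-1}))$ guarantee of the lemma comes from the self-concordance of the $\log\det$ objective plus the log barrier, which bounds the number of Newton steps per centering stage; the toy above just fixes 50 steps for simplicity.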
@@ -136,6 +136,8 @@ Note that $\dom_c=\dom_{c,0}.$ Consider the following perturbed problem:
\end{split}
\end{align}
+The restricted set $\dom_{c,\alpha}$ ensures that the partial derivatives of the optimal solution to $P_{c,\alpha}$ with respect to the costs are bounded from below. This implies that an approximate solution to $P_{c,\alpha}$ given by the barrier method is $\delta$-decreasing with respect to the costs. On the other hand, by taking $\alpha$ small enough, we ensure that the approximate solution to $P_{c,\alpha}$ is still an $\varepsilon$-accurate approximation of $L_c^*$. This methodology is summarized in the following proposition, whose proof can be found in \cite{arxiv}.
+
\begin{proposition}\label{prop:monotonicity}
For any $\delta\in(0,1]$ and any $\varepsilon\in(0,1]$, using the barrier
method to solve \eqref{eq:perturbed-primal} for $\alpha\defeq\varepsilon