Diffstat (limited to 'approximation.tex')
-rw-r--r--  approximation.tex | 123
1 file changed, 73 insertions(+), 50 deletions(-)
diff --git a/approximation.tex b/approximation.tex
index ccd063a..2f73a6b 100644
--- a/approximation.tex
+++ b/approximation.tex
@@ -50,7 +50,7 @@ expectation of $V$ under the distribution $P_\mathcal{N}^\lambda$:
\end{equation}
Function $F$ is an extension of $V$ to the domain $[0,1]^n$, as it agrees with $V$ at integer inputs: $F(\id_S) = V(S)$ for all
$S\subseteq\mathcal{N}$, where $\id_S$ denotes the indicator vector of $S$. %\citeN{pipage} have shown how to use this extension to obtain approximation guarantees for an interesting class of optimization problems through the \emph{pipage rounding} framework, which has been successfully applied in \citeN{chen, singer-influence}.
-Unfortunately, for $V$ the value function given by \eqref{modified} that we study here, the
+For the value function $V$ given by \eqref{modified} that we study here, the
multi-linear extension \eqref{eq:multi-linear} cannot be computed---let alone maximized---in
polynomial time. Hence, we introduce a new extension $L:[0,1]^n\to\reals$:
\begin{equation}\label{eq:our-relaxation}
@@ -91,9 +91,7 @@ In particular, for $L_c^*\defeq \max_{\lambda\in \dom_c} L(\lambda)$ the optimal
\begin{proposition}\label{prop:relaxation}
$OPT\leq L^*_c \leq 2 OPT + 2\max_{i\in\mathcal{N}}V(i)$.
\end{proposition}
-The proof of this proposition can be found in Appendix~\ref{proofofproprelaxation}. Clearly, $L^*_c$ is monotone in $c$: $L^*_c\geq L^*_{c'}$ for any two $c,c'\in \reals_+^n$ s.t.~$c\leq c'$, coordinate-wise.
-
-%The optimization program \eqref{eq:non-strategic} extends naturally to such
+The proof of this proposition can be found in Appendix~\ref{proofofproprelaxation}. As we discuss in the next section, $L^*_c$ can be computed by a polynomial-time algorithm to arbitrary accuracy; however, the outcome of this computation is not necessarily monotone: we address this by converting this poly-time estimator of $L^*_c$ into one that is ``almost'' monotone.%The optimization program \eqref{eq:non-strategic} extends naturally to such
%a relaxation. We define:
%\begin{equation}\tag{$P_c$}\label{eq:primal}
% L^*_c \defeq \max_{\lambda\in[0, 1]^{n}}
@@ -109,69 +107,94 @@ The proof of this proposition can be found in Appendix~\ref{proofofproprelaxatio
%$OPT$ through rounding. Together, these properties give the following
%proposition which is also proved in the Appendix.
-\subsection{Solving a Convex Problem Monotonously}\label{sec:monotonicity}
+\subsection{Polynomial-Time, Almost-Monotone Approximation}\label{sec:monotonicity}
+The $\log\det$ objective function of \eqref{eq:primal} is known to be concave and \emph{self-concordant} \cite{boyd2004convex}. Maximizing a concave, self-concordant function subject to linear constraints can be done through the \emph{barrier method} (see, \emph{e.g.}, Section 11.5.5 of \cite{boyd2004convex} for general self-concordant optimization, and \cite{vandenberghe1998determinant} for a detailed treatment of the $\log\det$ case). The performance of the barrier method in our setting is summarized by the following lemma:
+\begin{lemma}[\citeN{boyd2004convex}]\label{lemma:barrier}
+For any $\varepsilon>0$, the barrier method computes an
+approximation $\hat{L}^*_c$ that is $\varepsilon$-accurate, \emph{i.e.}, it satisfies $|\hat L^*_c- L^*_c|\leq \varepsilon$, in time $O\left(poly(n,d,\log\log\varepsilon^{-1})\right)$.
+\end{lemma}
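+As an illustration (a minimal numerical sketch, not part of our formal development), suppose, hypothetically, that the relaxation takes the form $L(\lambda)=\log\det\big(I_d+\sum_{i\in\mathcal{N}}\lambda_i x_ix_i^\top\big)$ for feature vectors $x_i\in\reals^d$; the actual objective is the one in \eqref{eq:our-relaxation}. An off-the-shelf conic solver can then stand in for the barrier method of Lemma~\ref{lemma:barrier}:
+\begin{verbatim}
+# Hedged sketch (not the paper's code): solve (P_c) for a *hypothetical*
+# log-det instance L(lam) = log det(I_d + sum_i lam_i x_i x_i^T).
+# cvxpy's conic solver stands in for the barrier method; its tolerance
+# plays the role of the accuracy epsilon in the lemma.
+import cvxpy as cp
+import numpy as np
+
+def solve_relaxation(X, c, B, lower=0.0, tol=1e-6):
+    """Maximize log det(I + sum_i lam_i x_i x_i^T) over D_{c,lower}."""
+    n, d = X.shape
+    lam = cp.Variable(n)
+    M = np.eye(d) + sum(lam[i] * np.outer(X[i], X[i]) for i in range(n))
+    prob = cp.Problem(cp.Maximize(cp.log_det(M)),
+                      [lam >= lower, lam <= 1, c @ lam <= B])
+    # 'lower' anticipates the perturbed program (P_{c,alpha}) below.
+    prob.solve(solver=cp.SCS, eps=tol)
+    return prob.value, lam.value
+\end{verbatim}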
-Note, that the feasible set in Problem~\eqref{eq:primal} increases (for the
-inclusion) when the cost decreases. As a consequence, $c\mapsto L^*_c$ is
-non-increasing.
+Clearly, the optimal value $L^*_c$ of \eqref{eq:primal} is monotone in $c$: for any two $c,c'\in \reals_+^n$ s.t.~$c\leq c'$ coordinate-wise, $\dom_{c'}\subseteq \dom_c$ and thus $L^*_c\geq L^*_{c'}$. Hence, the map $c\mapsto L^*_c$ is non-increasing. Unfortunately, the same is not true for the output $\hat{L}^*_c$ of the barrier method: an additive error of up to $\varepsilon$ can reverse an arbitrarily small gap between $L^*_c$ and $L^*_{c'}$, so there is no guarantee that the $\varepsilon$-accurate approximation $\hat{L}^*_c$ exhibits any kind of monotonicity.
-Furthermore, \eqref{eq:primal} being a convex optimization problem, using
-standard convex optimization algorithms (Lemma~\ref{lemma:barrier} gives
-a formal statement for our specific problem) we can compute
-a $\varepsilon$-accurate approximation of its optimal value as defined below.
+Nevertheless, it is possible to use the barrier method to construct an approximation of $L^*_{c}$ that is ``almost'' monotone. More specifically, given $\delta>0$, we will say that $f:\reals^n\to\reals$ is
+\emph{$\delta$-decreasing} if
+\begin{equation}\label{eq:dd}
+ f(x) \geq f(x+\mu e_i), \qquad
+ \text{for all}~i\in \mathcal{N},x\in\reals^n, \mu\geq\delta,
+\end{equation}
+where $e_i$ is the $i$-th canonical basis vector of $\reals^n$.
+In other words, $f$ is $\delta$-decreasing if increasing any coordinate of the input by $\delta$ or more cannot increase the value of $f$ above $f(x)$.
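+For instance, any non-increasing function is $\delta$-decreasing for every $\delta>0$, but the converse fails: as an illustrative example, for $n=1$ the function $f(x)=-x+\frac{\delta}{4}\sin(8x/\delta)$ increases on short intervals, yet satisfies $f(x+\mu)-f(x)\leq -\mu+\frac{\delta}{2}<0$ for all $\mu\geq\delta$, and is therefore $\delta$-decreasing without being non-increasing.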
-\begin{definition}
-$a$ is an $\varepsilon$-accurate approximation of $b$ iff $|a-b|\leq \varepsilon$.
-\end{definition}
+We next establish that, using the barrier method, one can construct an algorithm that computes $L^*_c$ to arbitrary accuracy in polynomial time \emph{and} whose output is $\delta$-decreasing as a function of $c$. We achieve this by restricting the optimization to a subset of $\dom_c$ on which the concave relaxation $L$ is ``sufficiently'' concave. Formally, for $\alpha\geq 0$ let $$\textstyle\dom_{c,\alpha} \defeq \{\lambda \in [\alpha,1]^n: \sum_{i\in \mathcal{N}}c_i\lambda_i \leq B\}\subseteq \dom_c . $$
+Note that $\dom_c=\dom_{c,0}$. Consider the following perturbed version of \eqref{eq:primal}, whose optimal value we denote by $L^*_{c,\alpha}$ (so that $L^*_{c,0}=L^*_c$):
+\begin{align}\tag{$P_{c,\alpha}$}\label{eq:perturbed-primal}
+\begin{split} \text{Maximize:} &\qquad L(\lambda)\\
+\text{subject to:} & \qquad\lambda \in \dom_{c,\alpha}
+\end{split}
+\end{align}
-Note however that an $\varepsilon$-accurate approximation of a non-increasing
-function is not in general non-increasing itself. The goal of this section is
-to approximate $L^*_c$ while preserving monotonicity. The estimator we
-construct has a weaker form of monotonicity that we call
-\emph{$\delta$-monotonicity}.
+%Note, that the feasible set in Problem~\eqref{eq:primal} increases (for the
+%inclusion) when the cost decreases.
+%non-increasing.
-\begin{definition}
-Let $f$ be a function from $\reals^n$ to $\reals$, we say that $f$ is
-\emph{$\delta$-increasing along the $i$-th coordinate} iff:
-\begin{equation}\label{eq:dd}
- \forall x\in\reals^n,\;
- \forall \mu\geq\delta,\;
- f(x+\mu e_i)\geq f(x)
-\end{equation}
-where $e_i$ is the $i$-th canonical basis vector of $\reals^n$. By extension,
-$f$ is $\delta$-increasing iff it is $\delta$-increasing along all coordinates.
+%Furthermore, \eqref{eq:primal} being a convex optimization problem, using
+%standard convex optimization algorithms (Lemma~\ref{lemma:barrier} gives
+%a formal statement for our specific problem) we can compute
+%a $\varepsilon$-accurate approximation of its optimal value as defined below.
-We define \emph{$\delta$-decreasing} functions by reversing the inequality in
-\eqref{eq:dd}.
-\end{definition}
+%\begin{definition}
+%$a$ is an $\varepsilon$-accurate approximation of $b$ iff $|a-b|\leq \varepsilon$.
+%\end{definition}
-We consider a perturbation of \eqref{eq:primal} by introducing:
-\begin{equation}\tag{$P_{c, \alpha}$}\label{eq:perturbed-primal}
- L^*_c(\alpha) \defeq \max_{\lambda\in[\alpha, 1]^{n}}
- \left\{L(\lambda) \Big| \sum_{i=1}^{n} \lambda_i c_i
- \leq B\right\}
-\end{equation}
-Note that we have $L^*_c = L^*_c(0)$. We will also assume that
-$\alpha<\frac{1}{nB}$ so that \eqref{eq:perturbed-primal} has at least one
-feasible point: $(\frac{1}{nB},\ldots,\frac{1}{nB})$. By computing
-an approximation of $L^*_c(\alpha)$ as in Algorithm~\ref{alg:monotone}, we
-obtain a $\delta$-decreasing approximation of $L^*_c$.
+%Note however that an $\varepsilon$-accurate approximation of a non-increasing
+%function is not in general non-increasing itself. The goal of this section is
+%to approximate $L^*_c$ while preserving monotonicity. The estimator we
+%construct has a weaker form of monotonicity that we call
+%\emph{$\delta$-monotonicity}.
+
+%\begin{definition}
+%Let $f$ be a function from $\reals^n$ to $\reals$, we say that $f$ is
+%\emph{$\delta$-increasing along the $i$-th coordinate} iff:
+%\begin{equation}\label{eq:dd}
+% \forall x\in\reals^n,\;
+% \forall \mu\geq\delta,\;
+% f(x+\mu e_i)\geq f(x)
+%\end{equation}
+%where $e_i$ is the $i$-th canonical basis vector of $\reals^n$. By extension,
+%$f$ is $\delta$-increasing iff it is $\delta$-increasing along all coordinates.
+
+%We define \emph{$\delta$-decreasing} functions by reversing the inequality in
+%\eqref{eq:dd}.
+%\end{definition}
+
+%We consider a perturbation of \eqref{eq:primal} by introducing:
+%\begin{equation}\tag{$P_{c, \alpha}$}\label{eq:perturbed-primal}
+% L^*_{c,\alpha} \defeq \max_{\lambda\in[\alpha, 1]^{n}}
+% \left\{L(\lambda) \Big| \sum_{i=1}^{n} \lambda_i c_i
+% \leq B\right\}
+%\end{equation}
+%Note that we have $L^*_c = L^*_c(0)$. We will also assume that
+%$\alpha<\frac{1}{nB}$ so that \eqref{eq:perturbed-primal} has at least one
+%feasible point: $(\frac{1}{nB},\ldots,\frac{1}{nB})$. By computing
+%an approximation of $L^*_{c,\alpha}$ as in Algorithm~\ref{alg:monotone}, we
+%obtain a $\delta$-decreasing approximation of $L^*_c$.
\begin{algorithm}[t]
\caption{Almost-monotone estimation of $L^*_c$}\label{alg:monotone}
\begin{algorithmic}[1]
- \State $\alpha \gets \varepsilon B^{-1}(\delta+n^2)^{-1} $
-
- \State Compute a $\frac{1}{2^{n+1}}\alpha\delta b$-accurate approximation of
- $L^*_c(\alpha)$
+ \Require{$B\in \reals_+$, $c\in\reals^n_+$, $\delta\in (0,1]$, $\varepsilon\in (0,1]$}
+ \State $\alpha \gets \min\left(\varepsilon B^{-1}(\delta+n^2)^{-1},\frac{1}{nB}\right) $
+ \State Use the barrier method to solve \eqref{eq:perturbed-primal} with accuracy $\varepsilon'=\frac{1}{2^{n+1}}\alpha\delta b$; denote the output by $\hat{L}^*_{c,\alpha}$
+ \State \textbf{return} $\hat{L}^*_{c,\alpha}$
\end{algorithmic}
\end{algorithm}
+Our construction of a $\delta$-decreasing, $\varepsilon$-accurate approximator of $L_c^*$ proceeds as follows: first, it computes an appropriately selected lower bound $\alpha$; using this bound, it solves the perturbed problem \eqref{eq:perturbed-primal} via the barrier method, at an appropriately selected accuracy $\varepsilon'$. The corresponding output is returned as an approximation of $L^*_c$. A summary of the algorithm and the specific choices of $\alpha$ and $\varepsilon'$ are given in Algorithm~\ref{alg:monotone}. The following proposition, proved in Appendix~\ref{proofofpropmonotonicity}, establishes that this algorithm has the desired properties:
\begin{proposition}\label{prop:monotonicity}
For any $\delta\in(0,1]$ and any $\varepsilon\in(0,1]$,
Algorithm~\ref{alg:monotone} computes a $\delta$-decreasing,
$\varepsilon$-accurate approximation of $L^*_c$. The running time of the
- algorithm is $O\big(poly(n, d, \log\log\frac{1}{b\varepsilon\delta})\big)$
+ algorithm is $O\big(poly(n, d, \log\log\frac{1}{b\varepsilon\delta})\big)$. \note{THIBAUT, IS THERE A BUDGET $B$ IN THE DENOM OF THE LOGLOG TERM?}
\end{proposition}
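+To make the construction concrete, the following sketch instantiates Algorithm~\ref{alg:monotone} for the hypothetical $\log\det$ objective of the sketch following Lemma~\ref{lemma:barrier}; the solver tolerance merely mimics the barrier-method accuracy $\varepsilon'$, so this is illustrative rather than a certified implementation:
+\begin{verbatim}
+# Hedged sketch of Algorithm 1 (illustrative only).  Reuses
+# solve_relaxation from the previous sketch; b is the constant appearing
+# in the accuracy eps' of Algorithm 1, left as a parameter here.
+def almost_monotone_estimate(X, c, B, delta, eps, b=1.0):
+    n, _ = X.shape
+    # Line 1: alpha <- min(eps / (B (delta + n^2)), 1 / (n B))
+    alpha = min(eps / (B * (delta + n ** 2)), 1.0 / (n * B))
+    # Line 2: target accuracy eps' = alpha * delta * b / 2^{n+1}.  For
+    # moderate n this is far below what a numerical solver can certify,
+    # so we clip it; the exact guarantee needs a true barrier method.
+    eps_prime = max(alpha * delta * b / 2 ** (n + 1), 1e-9)
+    value, _ = solve_relaxation(X, c, B, lower=alpha, tol=eps_prime)
+    return value  # hat L*_{c,alpha}: eps-accurate, delta-decreasing in c
+\end{verbatim}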
-
+We note that executing the barrier method on the restricted set $\dom_{c,\alpha}$ is necessary: when run over the entire domain, the algorithm's output may fail to be $\delta$-decreasing even when the approximation accuracy is small. This is because when the optimal $\lambda\in \dom_c$ lies on the boundary, certain costs are saturated: increasing them has no effect on the optimal value. Forcing the optimization to happen ``off'' the boundary ensures that this does not occur, while taking $\alpha$ small ensures that the perturbation costs little in terms of approximation accuracy.