Diffstat (limited to 'approximation.tex')
-rw-r--r--  approximation.tex  18
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/approximation.tex b/approximation.tex
index aed79a0..6cc9e49 100644
--- a/approximation.tex
+++ b/approximation.tex
@@ -53,7 +53,7 @@ Function $F$ is an extension of $V$ to the domain $[0,1]^n$, as it equals $V$ o
$S\subseteq\mathcal{N}$, where $\id_S$ denotes the indicator vector of $S$. %\citeN{pipage} have shown how to use this extension to obtain approximation guarantees for an interesting class of optimization problems through the \emph{pipage rounding} framework, which has been successfully applied in \citeN{chen, singer-influence}.
Contrary to problems such as \textsc{Knapsack}, the
multi-linear extension \eqref{eq:multi-linear} cannot be optimized in
-polynomial time for the value function $V$ that we study here, given by \eqref{modified}. Hence, we introduce a new extension $L:[0,1]^n\to\reals$ s.t.~
+polynomial time for the value function $V$ we study here, given by \eqref{modified}. Hence, we introduce an extension $L:[0,1]^n\to\reals$ s.t.~
\begin{equation}\label{eq:our-relaxation}
\quad L(\lambda) \defeq
\log\det\left(I_d + \sum_{i\in\mathcal{N}} \lambda_i x_i\T{x_i}\right)=
@@ -112,17 +112,17 @@ The proof of this proposition can be found in Appendix~\ref{proofofproprelaxatio
The $\log\det$ objective function of \eqref{eq:primal} is strictly concave and \emph{self-concordant} \cite{boyd2004convex}. The maximization of a concave, self-concordant function subject to a set of linear constraints can be performed through the \emph{barrier method} (see, \emph{e.g.}, \cite{boyd2004convex} Section 11.5.5 for general self-concordant optimization as well as \cite{vandenberghe1998determinant} for a detailed treatment of the $\log\det$ objective). The performance of the barrier method is summarized in our case by the following lemma:
\begin{lemma}[\citeN{boyd2004convex}]\label{lemma:barrier}
For any $\varepsilon>0$, the barrier method computes an
-approximation $\hat{L}^*_c$ that is $\varepsilon$-accurate, \emph{i.e.}, it satisfies $|\hat L^*_c- L^*_c|\leq \varepsilon$, in time $O\left(poly(n,d,\log\log\varepsilon^{-1})\right)$. The same guarantees apply for maximizing $L$ subject to an arbitrary set of $O(n)$ linear constraints.
+approximation $\hat{L}^*_c$ that is $\varepsilon$-accurate, \emph{i.e.}, it satisfies $|\hat L^*_c- L^*_c|\leq \varepsilon$, in time $O\left(poly(n,d,\log\log\varepsilon^{-1})\right)$. The same guarantees apply when maximizing $L$ subject to an arbitrary set of $O(n)$ linear constraints.
\end{lemma}
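As a rough numerical illustration of the maximization in the lemma (not of the barrier method itself), the sketch below maximizes $L(\lambda)=\log\det(I_d+\sum_i \lambda_i x_i\T{x_i})$ over a simple box $0\leq\lambda\leq 1$ with a generic bound-constrained solver, using the closed-form gradient $\partial L/\partial\lambda_i = \T{x_i}M^{-1}x_i$ where $M = I_d+\sum_i \lambda_i x_i\T{x_i}$. The random data and the box domain are illustrative assumptions, not the paper's $\dom_c$.

```python
import numpy as np
from scipy.optimize import minimize

def make_objective(X):
    """Return lam -> (-L(lam), -grad L(lam)) for a scipy minimizer,
    where L(lam) = log det(I_d + sum_i lam_i x_i x_i^T)."""
    n, d = X.shape
    def neg_L_and_grad(lam):
        M = np.eye(d) + (X * lam[:, None]).T @ X   # I_d + sum_i lam_i x_i x_i^T
        _, logdet = np.linalg.slogdet(M)
        Minv = np.linalg.inv(M)
        # dL/dlam_i = x_i^T M^{-1} x_i  (>= 0, so L is coordinatewise nondecreasing)
        grad = np.einsum('ij,jk,ik->i', X, Minv, X)
        return -logdet, -grad
    return neg_L_and_grad

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))                     # rows are the vectors x_i
res = minimize(make_objective(X), x0=np.full(6, 0.5), jac=True,
               method='L-BFGS-B', bounds=[(0.0, 1.0)] * 6)
lam_star, L_star = res.x, -res.fun
# Since L is nondecreasing in each coordinate, the box-constrained maximum
# sits at lam = (1,...,1), where L = log det(I_3 + X^T X).
```

Because $L$ is coordinatewise nondecreasing here, the solver pushes every $\lambda_i$ to its upper bound — a concrete instance of the boundary saturation discussed below Proposition~\ref{prop:monotonicity}.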
Clearly, the optimal value $L^*_c$ of \eqref{eq:primal} is monotone in $c$. Formally, for any two $c,c'\in \reals_+^n$ s.t.~$c\leq c'$ coordinate-wise, $\dom_{c'}\subseteq \dom_c$ and thus $L^*_c\geq L^*_{c'}$. Hence, the map $c\mapsto L^*_c$ is non-increasing. Unfortunately, the same is not true for the output $\hat{L}_c^*$ of the barrier method: there is no guarantee that the $\epsilon$-accurate approximation $\hat{L}^*_c$ exhibits any kind of monotonicity.
-Nevertheless, we prove that it is possible to use the barrier method to construct an approximation of $L^*_{c}$ that is ``almost'' monotone. More specifically, given $\delta>0$, we will say that $f:\reals^n\to\reals$ is
+Nevertheless, we prove that it is possible to use the barrier method to construct an approximation of $L^*_{c}$ that is ``almost'' monotone. More specifically, given $\delta>0$, we say that $f:\reals^n\to\reals$ is
\emph{$\delta$-decreasing} if
-\begin{equation}\label{eq:dd}
- f(x) \geq f(x+\mu e_i), \qquad
- \text{for all}~i\in \mathcal{N},x\in\reals^n, \mu\geq\delta,
-\end{equation}
+%\begin{equation}\label{eq:dd}
+ $f(x) \geq f(x+\mu e_i)$,
+ for all $i\in \mathcal{N}$, $x\in\reals^n$, $\mu\geq\delta$,
+%\end{equation}
where $e_i$ is the $i$-th canonical basis vector of $\reals^n$.
In other words, $f$ is $\delta$-decreasing if increasing any single coordinate of the input $x$ by $\delta$ or more yields an output of at most $f(x)$.
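To make the definition concrete, here is a sampled check of the $\delta$-decreasing property on a toy one-dimensional function; the function, the sampling grid, and the step sizes are all illustrative choices, not objects from the paper.

```python
import numpy as np

def is_delta_decreasing_on(f, mu_values, points, n):
    """Sampled check of the definition: f(x) >= f(x + mu * e_i) for every
    sampled x, every coordinate i, and every step mu (all mus >= delta)."""
    eye = np.eye(n)
    return all(f(x) >= f(x + mu * eye[i])
               for x in points for i in range(n) for mu in mu_values)

# Toy f on R: a decreasing trend plus a bounded wiggle.  Along any
# coordinate step of size mu,
#   f(x) - f(x + mu) = mu - 0.1*(sin(30(x+mu)) - sin(30x)) >= mu - 0.2,
# so f is delta-decreasing for delta = 0.25, yet tiny steps can increase f.
f = lambda x: -x.sum() + 0.1 * np.sin(30 * x).sum()
points = [np.array([t]) for t in np.linspace(0.0, 1.0, 200)]

ok_big  = is_delta_decreasing_on(f, mu_values=[0.25, 0.5, 1.0], points=points, n=1)
ok_tiny = is_delta_decreasing_on(f, mu_values=[0.05], points=points, n=1)
# ok_big is True; ok_tiny is False (some 0.05-step increases f).
```

The example mirrors the definition's point: monotonicity is only required for steps of size at least $\delta$, so a $\delta$-decreasing function may still wiggle at finer scales.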
@@ -191,12 +191,12 @@ Note that $\dom_c=\dom_{c,0}.$ Consider the following perturbation of the concav
\end{algorithmic}
\end{algorithm}
-Our construction of a $\delta$-decreasing, $\varepsilon$-accurate approximator of $\hat{L}_c^*$ proceeds as follows: first, it computes an appropriately selected lower bound $\alpha$; using this bound, it proceeds by solving the perturbed problem \eqref{eq:perturbed-primal} using the barrier method, also at an appropriately selected accuracy $\varepsilon'$, obtaining thusly a $\varepsilon'$-accurate approximation of $L^*_{c,\alpha}\defeq \max_{\lambda\in \dom_{c,\alpha}} L(\lambda)$ . The corresponding output is returned as an approximation of $L^*_c$. A summary of the algorithm and the specific choices of $\alpha$ and $\varepsilon'$ are given in Algorithm~\ref{alg:monotone}. The following proposition, which is proved in Appendix~\ref{proofofpropmonotonicity}, establishes that this algorithm has both properties we desire:
+Our construction of a $\delta$-decreasing, $\varepsilon$-accurate approximator of $L_c^*$ proceeds as follows: first, it computes an appropriately selected lower bound $\alpha$; using this bound, it solves the perturbed problem \eqref{eq:perturbed-primal} with the barrier method at an appropriately selected accuracy $\varepsilon'$, thereby obtaining an $\varepsilon'$-accurate approximation of $L^*_{c,\alpha}\defeq \max_{\lambda\in \dom_{c,\alpha}} L(\lambda)$. The corresponding output is returned as an approximation of $L^*_c$. A summary of the algorithm and the specific choices of $\alpha$ and $\varepsilon'$ are given in Algorithm~\ref{alg:monotone}. The following proposition, which is proved in Appendix~\ref{proofofpropmonotonicity}, establishes that this algorithm has both properties we desire:
\begin{proposition}\label{prop:monotonicity}
For any $\delta\in(0,1]$ and any $\varepsilon\in(0,1]$,
Algorithm~\ref{alg:monotone} computes a $\delta$-decreasing,
$\varepsilon$-accurate approximation of $L^*_c$. The running time of the
algorithm is $O\big(poly(n, d, \log\log\frac{B}{b\varepsilon\delta})\big)$.
\end{proposition}
-We note that the the execution of the barrier method on the restricted set $\dom_{c,\alpha}$ is necessary. The algorithm's output when executed over the entire domain may not necessarily be $\delta$-decreasing, even when the approximation accuracy is small. This is because costs become saturated when the optimal $\lambda\in \dom_c$ lies at the boundary: increasing them has no effect on the objective. Forcing the optimization to happen ``off'' the boundary ensures that this does not occur, while taking $\alpha$ to be small ensures that this perturbation does not cost much in terms of approximation accuracy.
+We note that the execution of the barrier method on the restricted set $\dom_{c,\alpha}$ is necessary. The algorithm's output when executed over the entire domain may fail to be $\delta$-decreasing, even when the approximation accuracy is small. This is because costs become saturated when the optimal $\lambda\in \dom_c$ lies at the boundary: increasing them has no effect on the objective. Forcing the optimization to happen ``off'' the boundary ensures that this does not occur, while taking $\alpha$ to be small ensures that this perturbation does not cost much in terms of approximation accuracy.
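The last point — that a small $\alpha$ costs little in accuracy — can be seen on a toy instance: shrinking a box domain by $\alpha$ lowers the optimum of $L$ strictly, but only by $O(\alpha)$. The box domain and the random data below are illustrative assumptions; the paper's $\dom_{c,\alpha}$ is defined in the text, not here.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))
G = X.T @ X                        # sum_i x_i x_i^T

def L_star(upper):
    # L is coordinatewise increasing, so over the box [0, upper]^n its
    # maximum is attained at lam_i = upper for every i, giving the
    # closed form log det(I_3 + upper * G).
    return np.linalg.slogdet(np.eye(3) + upper * G)[1]

alpha = 1e-3
gap = L_star(1.0) - L_star(1.0 - alpha)
# gap > 0: the shrunk box strictly lowers the optimum (no flat region),
# yet gap = O(alpha): roughly alpha * tr((I + G)^{-1} G) < alpha * d.
```

The strict decrease is what restores ($\delta$-)monotonicity off the boundary, while the $O(\alpha)$ bound on the loss is why a small $\alpha$ preserves $\varepsilon$-accuracy.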