| author | Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> | 2013-07-04 11:58:04 -0700 |
|---|---|---|
| committer | Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> | 2013-07-04 11:58:04 -0700 |
| commit | a53c28eebf8926d02a835dc55a903b6528c8e784 (patch) | |
| tree | 75a40cb6f0e703b5f0a886c2228aaf285c47dce7 /approximation.tex | |
| parent | d8618b13eacfb5578e023bd4070dabe958548f2b (diff) | |
| download | recommendation-a53c28eebf8926d02a835dc55a903b6528c8e784.tar.gz | |
added lower bound in proof, some wording changed
Diffstat (limited to 'approximation.tex')
| -rw-r--r-- | approximation.tex | 20 |
1 files changed, 10 insertions, 10 deletions
diff --git a/approximation.tex b/approximation.tex
index cc861e8..de1fecd 100644
--- a/approximation.tex
+++ b/approximation.tex
@@ -46,17 +46,17 @@ expectation of $V$ under the distribution $P_\mathcal{N}^\lambda$:
 F(\lambda) \defeq \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\big[V(S)\big]
 % = \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) V(S)
-= \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\left[ \log\det\left( I_d + \sum_{i\in S} x_i\T{x_i}\right) \right].
+= \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\left[ \log\det\left( I_d + \sum_{i\in S} x_i\T{x_i}\right) \right],\qquad \lambda\in[0,1]^n.
 \end{equation}
 Function $F$ is an extension of $V$ to the domain $[0,1]^n$, as it equals $V$ on integer inputs:
 $F(\id_S) = V(S)$ for all $S\subseteq\mathcal{N}$, where $\id_S$ denotes the indicator vector of $S$.
 %\citeN{pipage} have shown how to use this extension to obtain approximation guarantees for an interesting class of optimization problems through the \emph{pipage rounding} framework, which has been successfully applied in \citeN{chen, singer-influence}.
 Contrary to problems such as \textsc{Knapsack}, the multi-linear extension \eqref{eq:multi-linear} cannot be optimized in
-polynomial time for the value function $V$ that we study here, given by \eqref{modified}. Hence, we introduce a new extension $L:[0,1]^n\to\reals$:
+polynomial time for the value function $V$ that we study here, given by \eqref{modified}. Hence, we introduce a new extension $L:[0,1]^n\to\reals$ s.t.~
 \begin{equation}\label{eq:our-relaxation}
-\forall\,\lambda\in[0,1]^n,\quad L(\lambda) \defeq
+\quad L(\lambda) \defeq
 \log\det\left(I_d + \sum_{i\in\mathcal{N}} \lambda_i x_i\T{x_i}\right)=
-\log\det\left(\mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\bigg[I_d + \sum_{i\in S} x_i\T{x_i} \bigg]\right).
+\log\det\left(\mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\bigg[I_d + \sum_{i\in S} x_i\T{x_i} \bigg]\right), \qquad \lambda\in[0,1]^n.
 \end{equation}
 Note that $L$ also extends $V$, and follows naturally from the multi-linear extension by swapping the expectation and $\log \det$ in \eqref{eq:multi-linear}.
 Crucially, it is \emph{strictly concave} on $[0,1]^n$, a fact that we exploit in the next section to maximize $L$ subject to the budget constraint in polynomial time.
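The two extensions in this hunk are easy to sanity-check numerically. The following is a minimal editorial sketch (not from the paper; the data, dimensions, and function names are made up) verifying that $L$ agrees with $V$ on indicator vectors, and that a Monte Carlo estimate of the multi-linear extension $F$ stays below $L$, as concavity of $\log\det$ plus Jensen's inequality imply.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 3
X = rng.normal(size=(n, d))  # rows play the role of the vectors x_i (synthetic)

def V(S):
    """Set value V(S) = log det(I_d + sum_{i in S} x_i x_i^T)."""
    XS = X[sorted(S)]
    return np.linalg.slogdet(np.eye(d) + XS.T @ XS)[1]

def L(lam):
    """Concave extension L(lam) = log det(I_d + sum_i lam_i x_i x_i^T)."""
    return np.linalg.slogdet(np.eye(d) + X.T @ (lam[:, None] * X))[1]

def F_mc(lam, samples=20000):
    """Monte Carlo estimate of the multi-linear extension F(lam)."""
    draws = rng.random((samples, n)) < lam  # S ~ P_N^lam: include i w.p. lam_i
    return np.mean([V(np.flatnonzero(row)) for row in draws])

# L extends V: it matches V on indicator vectors ...
S = {0, 3, 7}
ind = np.zeros(n)
ind[list(S)] = 1.0
assert abs(L(ind) - V(S)) < 1e-9

# ... and dominates F on fractional inputs (Jensen; log det is concave)
lam = rng.uniform(size=n)
print(f"F(lam) ~ {F_mc(lam):.4f} <= L(lam) = {L(lam):.4f}")
```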
@@ -91,7 +91,7 @@ In particular, for $L_c^*\defeq \max_{\lambda\in \dom_c} L(\lambda)$ the optimal
 \begin{proposition}\label{prop:relaxation}
 $OPT\leq L^*_c \leq 2 OPT + 2\max_{i\in\mathcal{N}}V(i)$.
 \end{proposition}
-The proof of this proposition can be found in Appendix~\ref{proofofproprelaxation}. As we discuss in the next section, $L^*_c$ can be computed a polynomial time algorithm at arbitrary accuracy; however, the outcome of this computation may not necessarily be monotone: we will address this by converting this poly-time estimator of $L^*_c$ to one that is ``almost'' monotone.%The optimization program \eqref{eq:non-strategic} extends naturally to such
+The proof of this proposition can be found in Appendix~\ref{proofofproprelaxation}. As we discuss in the next section, $L^*_c$ can be computed by a polynomial-time algorithm at arbitrary accuracy. However, the outcome of this computation may not necessarily be monotone; we address this by converting this poly-time estimator of $L^*_c$ to one that is ``almost'' monotone.%The optimization program \eqref{eq:non-strategic} extends naturally to such
 %a relaxation.
 We define:
 %\begin{equation}\tag{$P_c$}\label{eq:primal}
 % L^*_c \defeq \max_{\lambda\in[0, 1]^{n}}
@@ -107,14 +107,14 @@ The proof of this proposition can be found in Appendix~\ref{proofofproprelaxatio
 %$OPT$ through rounding. Together, these properties give the following
 %proposition which is also proved in the Appendix.
-\subsection{Polynomial-Time, Almost-Monotonous Approximation}\label{sec:monotonicity}
-	The $\log\det$ objective function of \eqref{eq:primal} is strictly concave and \emph{self-concordant} \cite{boyd2004convex}. The maximization of a concave, self-concordant function subject to a set of linear constraints can be performed through the \emph{barrier method} (see, \emph{e.g.}, \cite{boyd2004convex} Section 11.5.5 for general self-concordant optimization as well as \cite{vandenberghe1998determinant} for a detailed treatise of the $\log\det$ objective). The performance of the barrier method is summarized in our case by the following lemma:
+\subsection{Polynomial-Time, Almost-Monotone Approximation}\label{sec:monotonicity}
+	The $\log\det$ objective function of \eqref{eq:primal} is strictly concave and \emph{self-concordant} \cite{boyd2004convex}. The maximization of a concave, self-concordant function subject to a set of linear constraints can be performed through the \emph{barrier method} (see, \emph{e.g.}, \cite{boyd2004convex} Section 11.5.5 for general self-concordant optimization as well as \cite{vandenberghe1998determinant} for a detailed treatment of the $\log\det$ objective). The performance of the barrier method is summarized in our case by the following lemma:
 \begin{lemma}[\citeN{boyd2004convex}]\label{lemma:barrier}
 For any $\varepsilon>0$, the barrier method computes an approximation $\hat{L}^*_c$ that is $\varepsilon$-accurate, \emph{i.e.}, it satisfies $|\hat L^*_c- L^*_c|\leq \varepsilon$, in time $O\left(poly(n,d,\log\log\varepsilon^{-1})\right)$.
 \end{lemma}
-	Clearly, the optimal value $L^*_c$ of \eqref{eq:primal} is monotone in $c$. Formally, for any two $c,c'\in \reals_+^n$ s.t.~$c\leq c'$ coordinate-wise, $\dom_{c'}\subseteq \dom_c$ and thus $L^*_c\geq L^*_{c'}$. Hence, the map $c\mapsto L^*_c$ is non-increasing. Unfortunately, the same is not true for the output $\hat{L}_c^*$ of the barrier method: there is no guarantee that the $\epsilon$-accurate approximation $\hat{L}^*_c$ will exhibit any kind of monotonicity.
+	Clearly, the optimal value $L^*_c$ of \eqref{eq:primal} is monotone in $c$. Formally, for any two $c,c'\in \reals_+^n$ s.t.~$c\leq c'$ coordinate-wise, $\dom_{c'}\subseteq \dom_c$ and thus $L^*_c\geq L^*_{c'}$. Hence, the map $c\mapsto L^*_c$ is non-increasing. Unfortunately, the same is not true for the output $\hat{L}_c^*$ of the barrier method: there is no guarantee that the $\epsilon$-accurate approximation $\hat{L}^*_c$ exhibits any kind of monotonicity.
 Nevertheless, it is possible to use the barrier method to construct an approximation of $L^*_{c}$ that is ``almost'' monotone. More specifically, given $\delta>0$, we will say that $f:\reals^n\to\reals$ is \emph{$\delta$-decreasing} if
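To make the optimization step concrete, here is a small editorial sketch of solving $(P_c)$, i.e., maximizing $L(\lambda)$ over $\dom_c=\{\lambda\in[0,1]^n:\sum_i c_i\lambda_i\leq B\}$. It uses scipy's interior-point-style `trust-constr` solver as a stand-in for the barrier method the lemma refers to; `X`, `c`, and `B` are synthetic.

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

rng = np.random.default_rng(1)
n, d = 10, 3
X = rng.normal(size=(n, d))        # synthetic x_i's
c = rng.uniform(0.5, 1.5, size=n)  # per-item costs
B = 3.0                            # budget

def neg_L(lam):
    # trust-constr minimizes, so we hand it -L(lam)
    return -np.linalg.slogdet(np.eye(d) + X.T @ (lam[:, None] * X))[1]

res = minimize(
    neg_L,
    x0=np.full(n, 0.5 * B / c.sum()),              # strictly feasible start
    method="trust-constr",                         # interior-point style method
    bounds=[(0.0, 1.0)] * n,                       # lam in [0,1]^n
    constraints=[LinearConstraint(c, -np.inf, B)], # budget <c, lam> <= B
)
print("approximate L*_c:", -res.fun)
```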
@@ -125,7 +125,7 @@ where $e_i$ is the $i$-th canonical basis vector of $\reals^n$.
 In other words, $f$ is $\delta$-decreasing if increasing any coordinate by $\delta$ or more at input $x$ ensures that the output will be at most $f(x)$.
-Our next technical result establishes that, using the barrier method, it is possible to construct an algorithm that computes $L^*_c$ at arbitrary accuracy in polynomial time \emph{and} is $\delta$-monotone. We achieve this by restricting the optimization over a subset of $\dom_c$ at which the concave relaxation $L$ is ``sufficiently'' concave. Formally, for $\alpha\geq 0$ let $$\textstyle\dom_{c,\alpha} \defeq \{\lambda \in [\alpha,1]^n: \sum_{i\in \mathcal{N}}c_i\lambda_i \leq B\}\subseteq \dom_c . $$
+Our next technical result establishes that, using the barrier method, it is possible to construct an algorithm that computes $L^*_c$ at arbitrary accuracy in polynomial time \emph{and} is $\delta$-decreasing. We achieve this by restricting the optimization to a subset of $\dom_c$ on which the concave relaxation $L$ is ``sufficiently'' concave. Formally, for $\alpha\geq 0$ let $$\textstyle\dom_{c,\alpha} \defeq \{\lambda \in [\alpha,1]^n: \sum_{i\in \mathcal{N}}c_i\lambda_i \leq B\}\subseteq \dom_c.$$
 Note that $\dom_c=\dom_{c,0}.$ Consider the following perturbation of the concave relaxation \eqref{eq:primal}:
 \begin{align}\tag{$P_{c,\alpha}$}\label{eq:perturbed-primal}
 \begin{split}
 \text{Maximize:} &\qquad L(\lambda)\\
@@ -189,7 +189,7 @@ Note that $\dom_c=\dom_{c,0}.$ Consider the following perturbation of the concav
 \end{algorithmic}
 \end{algorithm}
-Our construction of a $\delta$-decreasing, $\varepsilon$-accurate approximator of $\hat{L}_c^*$ proceeds as follows: first, it computes an appropriately selected lower bound $\alpha$; using this bound, it proceeds by solving the perturbed problem \eqref{eq:perturbed-primal} using the barrier method, also at an appropriately selected accuracy $\varepsilon'$. The corresponding output is returned as an approximation of $L^*_c$. A summary of algorithm and the specific choices of $\alpha$ and $\varepsilon'$ are given in Algorithm~\ref{alg:monotone}. The following proposition, which is proved in Appendix~\ref{proofofpropmonotonicity}, establishes that this algorithm has the desirable properties:
+Our construction of a $\delta$-decreasing, $\varepsilon$-accurate approximator of $L_c^*$ proceeds as follows: first, it computes an appropriately selected lower bound $\alpha$; using this bound, it proceeds by solving the perturbed problem \eqref{eq:perturbed-primal} using the barrier method, also at an appropriately selected accuracy $\varepsilon'$, thus obtaining an $\varepsilon'$-accurate approximation of $L^*_{c,\alpha}\defeq \max_{\lambda\in \dom_{c,\alpha}} L(\lambda)$. The corresponding output is returned as an approximation of $L^*_c$. A summary of the algorithm and the specific choices of $\alpha$ and $\varepsilon'$ are given in Algorithm~\ref{alg:monotone}. The following proposition, which is proved in Appendix~\ref{proofofpropmonotonicity}, establishes that this algorithm has the desirable properties:
 \begin{proposition}\label{prop:monotonicity}
 For any $\delta\in(0,1]$ and any $\varepsilon\in(0,1]$, Algorithm~\ref{alg:monotone} computes a $\delta$-decreasing,
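Finally, a sketch of the restricted problem $(P_{c,\alpha})$ over $\dom_{c,\alpha}$, in the same vein as the previous snippet. This is illustrative only: Algorithm 1's concrete formulas choosing $\alpha$ and $\varepsilon'$ from $(\delta,\varepsilon)$ are not shown in this hunk, so both are plain inputs here.

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

def solve_perturbed(X, c, B, alpha, eps_prime=1e-9):
    """Sketch of (P_{c,alpha}): maximize L(lam) over
    D_{c,alpha} = {lam in [alpha,1]^n : <c,lam> <= B}.
    Assumes alpha is small enough that alpha * sum(c) <= B,
    so the all-alpha point is feasible."""
    n, d = X.shape

    def neg_L(lam):
        return -np.linalg.slogdet(np.eye(d) + X.T @ (lam[:, None] * X))[1]

    res = minimize(
        neg_L,
        x0=np.full(n, alpha),                          # lower corner of the shrunken box
        method="trust-constr",
        bounds=[(alpha, 1.0)] * n,                     # box pushed away from 0
        constraints=[LinearConstraint(c, -np.inf, B)],
        options={"gtol": eps_prime},                   # solver tolerance as a proxy for eps'
    )
    return -res.fun  # approximation of L*_{c,alpha}, returned in place of L*_c
```

Per the text above, the point of the shrunken box is that $L$ is "sufficiently" concave on $[\alpha,1]^n$: with the paper's choices of $\alpha$ and $\varepsilon'$, raising any cost $c_i$ by at least $\delta$ lowers the true optimum by more than the solver's error, which is what makes the returned estimate $\delta$-decreasing.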
