Diffstat (limited to 'approximation.tex')
 approximation.tex | 31 ++++++++++++++++++++++---------
 1 file changed, 22 insertions(+), 9 deletions(-)
diff --git a/approximation.tex b/approximation.tex
index dc39e5b..2747d5a 100755
--- a/approximation.tex
+++ b/approximation.tex
@@ -73,7 +73,7 @@ For all $\lambda\in[0,1]^{n},$
 \,L(\lambda)\leq F(\lambda)\leq L(\lambda)$.
 \end{lemma}
-The proof of this lemma can be found in Appendix~\ref{proofofrelaxation-ratio}. In short, exploiting the concavity of the $\log\det$ function over the set of positive semi-definite matrices, we first bound the ratio of all partial derivatives of $F$ and $L$. We then show that the bound on the ratio of the derivatives also implies a bound on the ratio $F/L$.
+The proof of this lemma can be found in \cite{arxiv}. In short, exploiting the concavity of the $\log\det$ function over the set of positive semi-definite matrices, we first bound the ratio of all partial derivatives of $F$ and $L$. We then show that the bound on the ratio of the derivatives also implies a bound on the ratio $F/L$.
 
 Armed with this result, we subsequently use pipage rounding to show that any $\lambda$ that maximizes the multi-linear extension $F$ can be rounded to an ``almost'' integral solution.
 More specifically, given a set of costs $c\in \reals^n_+$, we say that a $\lambda\in [0,1]^n$ is feasible if it belongs to the set $\dom_c =\{\lambda \in [0,1]^n: \sum_{i\in \mathcal{N}} c_i\lambda_i\leq B\}$.
 Then, the following lemma holds:
@@ -82,7 +82,7 @@ Armed with this result, we subsequently use pipage rounding to show that any $\
 $\bar{\lambda}\in \dom_c$ such that (a) $F(\lambda)\leq F(\bar{\lambda})$, and (b) at most one of the coordinates of $\bar{\lambda}$ is fractional. %, that is, lies in $(0,1)$ and:
 \end{lemma}
-The proof of this lemma is in Appendix \ref{proofoflemmarounding}, and follows the main steps of the pipage rounding method of \citeN{pipage}. % this rounding property is referred to in the literature as \emph{cross-convexity} (see, \emph{e.g.}, \cite{dughmi}), or $\varepsilon$-convexity by \citeN{pipage}.
+The proof of this lemma is in \cite{arxiv}, and follows the main steps of the pipage rounding method of \citeN{pipage}. % this rounding property is referred to in the literature as \emph{cross-convexity} (see, \emph{e.g.}, \cite{dughmi}), or $\varepsilon$-convexity by \citeN{pipage}.
 
 Together, Lemma~\ref{lemma:relaxation-ratio} and Lemma~\ref{lemma:rounding} imply that $OPT$, the optimal value of \EDP, can be approximated by solving the following convex optimization problem:
 \begin{align}\tag{$P_c$}\label{eq:primal}
 \text{Maximize:} \quad L(\lambda)\quad \text{subject to:} \quad\lambda \in \dom_c
@@ -91,7 +91,7 @@ In particular, for $L_c^*\defeq \max_{\lambda\in \dom_c} L(\lambda)$, the follow
 \begin{proposition}\label{prop:relaxation}
 	$OPT\leq L^*_c \leq 2 OPT + 2\max_{i\in\mathcal{N}}V(i)$.
 \end{proposition}
-The proof of this proposition can be found in Appendix~\ref{proofofproprelaxation}. As we discuss in the next section, $L^*_c$ can be computed by a poly-time algorithm at arbitrary accuracy. However, the outcome of this computation may not necessarily be monotone; we address this by converting this poly-time estimator of $L^*_c$ to one that is ``almost'' monotone.%The optimization program \eqref{eq:non-strategic} extends naturally to such
+The proof of this proposition can be found in \cite{arxiv}. As we discuss in the next section, $L^*_c$ can be computed by a poly-time algorithm at arbitrary accuracy. However, the outcome of this computation may not necessarily be monotone; we address this by converting this poly-time estimator of $L^*_c$ to one that is ``almost'' monotone.%The optimization program \eqref{eq:non-strategic} extends naturally to such
 %a relaxation. We define:
 %\begin{equation}\tag{$P_c$}\label{eq:primal}
 %	L^*_c \defeq \max_{\lambda\in[0, 1]^{n}}
@@ -123,12 +123,25 @@ Nevertheless, we prove that it is possible to use the barrier method to construc
 for all $i\in \mathcal{N},x\in\reals^n, \mu\geq\delta,$
 %\end{equation}
 where $e_i$ is the $i$-th canonical basis vector of $\reals^n$.
-In other words, $f$ is $\delta$-decreasing if increasing any coordinate by $\delta$ or more at input $x$ ensures that the output will be at most $f(x)$. The following holds:
+In other words, $f$ is $\delta$-decreasing if increasing any coordinate by $\delta$ or more at input $x$ ensures that the output will be at most $f(x)$.
+
+%Our next technical result establishes that, using the barrier method, it is possible to construct an algorithm that computes $L^*_c$ at arbitrary accuracy in polynomial time and is $\delta$-decreasing.
+
+We achieve this by restricting the optimization over a subset of $\dom_c$ at which the concave relaxation $L$ is ``sufficiently'' concave. Formally, for $\alpha\geq 0$ let
+$$\textstyle\dom_{c,\alpha} \defeq \{\lambda \in [\alpha,1]^n: \sum_{i\in \mathcal{N}}c_i\lambda_i \leq B\}\subseteq \dom_c . $$
+Note that $\dom_c=\dom_{c,0}.$ Consider the following perturbation of the concave relaxation \eqref{eq:primal}:
+\begin{align}\tag{$P_{c,\alpha}$}\label{eq:perturbed-primal}
+\begin{split} \text{Maximize:} &\qquad L(\lambda)\\
+\text{subject to:} & \qquad\lambda \in \dom_{c,\alpha}
+\end{split}
+\end{align}
 \begin{proposition}\label{prop:monotonicity}
-	For any $\delta\in(0,1]$ and any $\varepsilon\in(0,1]$,
-	there exists an algorithm which computes a $\delta$-decreasing,
-	$\varepsilon$-accurate approximation of $L^*_c$. The running time of the
-	algorithm is $O\big(poly(n, d, \log\log\frac{B}{b\varepsilon\delta})\big)$.
+	For any $\delta\in(0,1]$ and any $\varepsilon\in(0,1]$, using the barrier
+	method to solve \eqref{eq:perturbed-primal} for $\alpha\defeq\varepsilon
+	(\delta/B+n^2)^{-1}$ with accuracy $\frac{1}{2^{n+1}B}\alpha\delta b$
+	yields a $\delta$-decreasing, $\varepsilon$-accurate approximation of
+	$L^*_c$. The running time of the algorithm is $O\big(poly(n, d,
+	\log\log\frac{B}{b\varepsilon\delta})\big)$.
 \end{proposition}
-The proof and the algorithm (Algorithm~\ref{alg:monotone}) are in Appendix~\ref{proofofpropmonotonicity}.
+The proof and the formal description of the algorithm are in \cite{arxiv}.
 
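Note on the last hunk: the displayed inequality that defines a $\delta$-decreasing function sits just above the hunk's context window, so only its quantifier tail survives in the diff. Below is a minimal LaTeX sketch of that definition, reconstructed from the prose line "In other words, ..." and the surviving context lines, reusing the paper's \reals macro; the actual display in approximation.tex may be formatted differently.

% Sketch (not part of this commit): the $\delta$-decreasing definition,
% reconstructed from the surrounding prose; the actual display lies
% outside the diff context, so its exact form may differ.
A function $f:\reals^n\to\reals$ is $\delta$-decreasing if
\begin{equation*}
f(x+\mu e_i)\leq f(x), \qquad \text{for all } i\in \mathcal{N},\, x\in\reals^n,\, \mu\geq\delta,
\end{equation*}
where $e_i$ is the $i$-th canonical basis vector of $\reals^n$.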

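Similarly, the pipage rounding behind Lemma~\ref{lemma:rounding} is cited but not shown in this commit. The following is a hedged sketch of the standard budget-preserving pipage step, consistent with the lemma's statement; the paper's own proof in \cite{arxiv} may organize it differently.

% Sketch (not part of this commit): one pipage step. Given $\lambda$
% with two fractional coordinates $i\neq j$, perturb along a direction
% that leaves the budget $\sum_{k\in\mathcal{N}} c_k\lambda_k$ unchanged:
\begin{equation*}
\lambda(t) \defeq \lambda + t\,(c_j e_i - c_i e_j), \qquad \textstyle\sum_{k\in\mathcal{N}} c_k\lambda_k(t) = \sum_{k\in\mathcal{N}} c_k\lambda_k \ \text{ for all } t.
\end{equation*}
% By the cross-convexity ($\varepsilon$-convexity) property mentioned in
% the commented-out text of the second hunk, $t\mapsto F(\lambda(t))$ is
% convex, so $F$ does not decrease at one endpoint of the interval on
% which $\lambda(t)\in[0,1]^n$; at that endpoint $\lambda_i(t)$ or
% $\lambda_j(t)$ reaches $\{0,1\}$. Iterating this step yields the
% $\bar{\lambda}$ of the lemma, with at most one fractional coordinate.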