 approximation.tex | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/approximation.tex b/approximation.tex
index 10cf171..cc861e8 100644
--- a/approximation.tex
+++ b/approximation.tex
@@ -50,9 +50,9 @@ expectation of $V$ under the distribution $P_\mathcal{N}^\lambda$:
 \end{equation}
 Function $F$ is an extension of $V$ to the domain $[0,1]^n$, as it equals $V$ on integer inputs: $F(\id_S) = V(S)$ for all $S\subseteq\mathcal{N}$, where $\id_S$ denotes the indicator vector of $S$.
 %\citeN{pipage} have shown how to use this extension to obtain approximation guarantees for an interesting class of optimization problems through the \emph{pipage rounding} framework, which has been successfully applied in \citeN{chen, singer-influence}.
-For $V$ the value function given by \eqref{modified} that we study here, the
-multi-linear extension \eqref{eq:multi-linear} cannot be computed---let alone maximized---in
-polynomial time. Hence, we introduce a new extension $L:[0,1]^n\to\reals$:
+Contrary to problems such as \textsc{Knapsack}, the
+multi-linear extension \eqref{eq:multi-linear} cannot be optimized in
+polynomial time for the value function $V$ that we study here, given by \eqref{modified}. Hence, we introduce a new extension $L:[0,1]^n\to\reals$:
 \begin{equation}\label{eq:our-relaxation}
 \forall\,\lambda\in[0,1]^n,\quad L(\lambda) \defeq \log\det\left(I_d + \sum_{i\in\mathcal{N}} \lambda_i x_i\T{x_i}\right)=
@@ -194,7 +194,7 @@ Our construction of a $\delta$-decreasing, $\varepsilon$-accurate approximator o
 For any $\delta\in(0,1]$ and any $\varepsilon\in(0,1]$,
 Algorithm~\ref{alg:monotone} computes a $\delta$-decreasing,
 $\varepsilon$-accurate approximation of $L^*_c$. The running time of the
-algorithm is $O\big(poly(n, d, \log\log\frac{1}{b\varepsilon\delta})\big)$. \note{THIBAUT, IS THERE A BUDGET $B$ IN THE DENOM OF THE LOGLOG TERM?}
+algorithm is $O\big(poly(n, d, \log\log\frac{1}{b\varepsilon\delta})\big)$. \note{THIBAUT, IS THERE A BUDGET $B$ IN THE NUMERATOR OF THE LOGLOG TERM?}
 \end{proposition}
-We note that the the execution of the barrier method on the restricted set $\dom_{c,\alpha}$ is necessary. The algorithm's output when executed over the entire domain may not necessarily be $\delta$-decreasing, even when the approximation accuracy is small. This is because when the optimal $\lambda\in \dom_c$ lies at the boundary, certain costs can be saturated: increasing them has no effect on the objective. Forcing the optimization to happen ``off'' the boundary ensures that this does not occur, while taking $\alpha$ to be small ensures that this perturbation does not cost much in terms of approximation accuracy.
+We note that the execution of the barrier method on the restricted set $\dom_{c,\alpha}$ is necessary. The algorithm's output when executed over the entire domain may not necessarily be $\delta$-decreasing, even when the approximation accuracy is small. This is because costs become saturated when the optimal $\lambda\in \dom_c$ lies at the boundary: increasing them has no effect on the objective. Forcing the optimization to happen ``off'' the boundary ensures that this does not occur, while taking $\alpha$ to be small ensures that this perturbation does not cost much in terms of approximation accuracy.
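As a sanity check on the relaxation introduced in this patch, the extension property stated in the surrounding text, $F(\id_S) = V(S)$, also holds for the new log-det relaxation $L(\lambda) = \log\det\big(I_d + \sum_i \lambda_i x_i x_i^\top\big)$, since setting $\lambda = \id_S$ recovers $V(S)$ exactly. The sketch below verifies this numerically; the data $x_i$, the dimensions, and all function names are illustrative assumptions, not part of the paper.

```python
import numpy as np

def L(lam, X):
    """Relaxation from the patch: log det(I_d + sum_i lam_i x_i x_i^T).

    X has shape (n, d); row i is the vector x_i. (Illustrative sketch.)
    """
    d = X.shape[1]
    M = np.eye(d) + (X.T * lam) @ X  # sum_i lam_i x_i x_i^T via broadcasting
    return np.linalg.slogdet(M)[1]   # log|det|; M is PD, so det > 0

def V(S, X):
    """Set function V(S) = log det(I_d + sum_{i in S} x_i x_i^T)."""
    ind = np.zeros(X.shape[0])
    ind[list(S)] = 1.0               # indicator vector of S
    return L(ind, X)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))      # n = 5 vectors in R^3 (made-up data)
S = {0, 2, 4}
ind_S = np.array([1.0, 0.0, 1.0, 0.0, 1.0])

# Extension property on integer inputs: L(1_S) = V(S).
assert np.isclose(L(ind_S, X), V(S, X))
# L of the zero vector is log det(I_d) = 0.
assert np.isclose(L(np.zeros(5), X), 0.0)
```

Unlike the multi-linear extension, which averages $V$ over $2^n$ subsets, $L$ needs only a single $d \times d$ determinant per evaluation, which is the computational point the changed paragraph is making.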
