Diffstat (limited to 'paper/sections/algorithms.tex')
-rw-r--r--  paper/sections/algorithms.tex | 13
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/paper/sections/algorithms.tex b/paper/sections/algorithms.tex
index 810653f..7dd22b5 100644
--- a/paper/sections/algorithms.tex
+++ b/paper/sections/algorithms.tex
@@ -49,7 +49,7 @@
 An optimal solution to the above problem can be found in polynomial time
 using standard LP-solvers. The solution returned by the LP is
 \emph{fractional}, and requires a rounding procedure to return a feasible
 solution to our problem, where $S$ is integral. To round the solution we use
 the pipage rounding
-method~\cite{pipage}. We defer the details to the full version of the paper~\cite{full}.
+method~\cite{pipage}. We defer the details to Appendix~\ref{sec:lp-proofs}.
 \begin{lemma}
 For \mbox{\textsc{AdaptiveSeeding-LP}} defined in \eqref{eq:lp}, any
 fractional solution $(\boldsymbol\lambda, \mathbf{q})\in[0,1]^m\times[0,1]^n$
 can be rounded to an integral solution $\bar{\boldsymbol\lambda} \in \{0,1\}^{m}$
 s.t. $(1-1/e) F(\mathbf{p}\circ\mathbf{q}) \leq A(\bar{\lambda})$ in $O(m + n)$ steps.
@@ -109,7 +109,7 @@ $\mathcal{O}(T,b)$.
 non-decreasing in $b$.
 \end{lemma}
-The proof of this lemma can be found in the full version of the paper~\cite{full}. The main
+The proof of this lemma can be found in Appendix~\ref{sec:comb-proofs}. The main
 idea consists in writing:
 \begin{multline*}
 \mathcal{O}(T\cup\{x\},c)-\mathcal{O}(T\cup\{x\},b)=\int_b^c\partial_+\mathcal{O}_{T\cup\{x\}}(t)dt
@@ -137,10 +137,10 @@ y\in\neigh{X}\setminus T$, we need to show that:
 This can be done by partitioning the set $T$ into ``high value items'' (those
 with weight greater than $w_x$) and ``low value items'' and carefully
 applying Lemma~\ref{lemma:nd} to the associated subproblems.
-The proof is in the full version of the paper~\cite{full}.
+The proof is in Appendix~\ref{sec:comb-proofs}.
 Finally, Lemma~\ref{lemma:sub} can be used to show Proposition~\ref{prop:sub}
-whose proof can be found in the full version~\cite{full}.
+whose proof can be found in Appendix~\ref{sec:comb-proofs}.
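The pipage rounding step referenced in the first hunk can be sketched as follows. This is a generic sketch, not the paper's exact procedure: `F` is a hypothetical value oracle on fractional vectors (standing in for the multilinear relaxation), and the constraint is a plain cardinality budget.

```python
def pipage_round(lam, F, eps=1e-9):
    # Generic pipage-rounding sketch (hypothetical oracle F): repeatedly
    # pick two fractional coordinates and shift mass between them, keeping
    # their sum fixed, in whichever direction F does not decrease; each
    # step makes one coordinate integral, so the loop runs at most m times.
    lam = list(lam)
    while True:
        frac = [i for i, v in enumerate(lam) if eps < v < 1.0 - eps]
        if not frac:
            return [round(v) for v in lam]
        if len(frac) == 1:
            # lone fractional coordinate: round it in the better direction
            i = frac[0]
            hi, lo = lam[:], lam[:]
            hi[i], lo[i] = 1.0, 0.0
            lam = hi if F(hi) >= F(lo) else lo
            continue
        i, j = frac[0], frac[1]
        d = min(1.0 - lam[i], lam[j])   # push i up, j down to a boundary
        e = min(lam[i], 1.0 - lam[j])   # push i down, j up to a boundary
        a, b = lam[:], lam[:]
        a[i], a[j] = lam[i] + d, lam[j] - d
        b[i], b[j] = lam[i] - e, lam[j] + e
        lam = a if F(a) >= F(b) else b
```

For a linear `F` the pairwise step never loses value, so the integral point is at least as good as the fractional one; for the submodular objectives considered here, relating the rounded value to $F(\mathbf{p}\circ\mathbf{q})$ is where the $(1-1/e)$ factor of the lemma enters.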
 \begin{proposition}\label{prop:sub}
 Let $b\in\mathbf{R}^+$, then $\mathcal{O}(\neigh{S},b)$ is monotone and
@@ -195,8 +195,7 @@
 solution for the split with the highest value (breaking ties arbitrarily).
 This process can be trivially parallelized across $k-1$ machines, each
 performing a computation of a single split. With slightly more effort, for
 any $\epsilon>0$ one can parallelize over $\log_{1+\epsilon}n$ machines at the cost
-of losing a factor of $\epsilon$ in the approximation guarantee (see full
-version of the paper~\cite{full} for details).\newline
+of losing a factor of $\epsilon$ in the approximation guarantee (see Appendix~\ref{sec:para} for details).\newline
 \noindent
 \textbf{Implementation in MapReduce.} While the previous
 paragraph describes how to parallelize the outer \texttt{for} loop of
@@ -206,7 +205,7 @@
 applied to the function $\mathcal{O}\left(\neigh{\cdot}, t\right)$. The
 \textsc{Sample\&Prune} approach successfully applied in \cite{mr} to obtain
 MapReduce algorithms for various submodular maximizations can also be applied
 to Algorithm~\ref{alg:comb} to cast it in the MapReduce framework. The details
-of the algorithm can be found in the full version of the paper~\cite{full}.
+of the algorithm can be found in Appendix~\ref{sec:mr}.
 \newline
 %A slightly more sophisticated approach is to consider only $\log n$ splits: $(1,k-1),(2,k-2),\ldots,(2^{\lfloor \log n \rfloor},1)$ and then select the best solution from this set. It is not hard to see that in comparison to the previous approach, this would reduce the approximation guarantee by a factor of at most 2: if the optimal solution is obtained by spending $t$ on the first stage and $k-t$ in the second stage, then since $t \leq 2\cdot2^{\lfloor \log t \rfloor}$ the solution computed for $(2^{\lfloor \log t \rfloor}, k - 2^{\lfloor \log t \rfloor})$ will have at least half that value.
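The split enumeration discussed in the last hunks (try every budget split between the two stages and keep the best, or only geometrically spaced splits so fewer machines suffice) can be sketched as below. `solve(t, k - t)` is a hypothetical value oracle for one split, i.e. one run of the combinatorial algorithm, so each call could be farmed out to its own machine.

```python
def best_split(k, solve, eps=1.0):
    # Exhaustive version: one split per machine, k - 1 machines total.
    exact_t = max(range(1, k), key=lambda t: solve(t, k - t))
    # Geometric version: only about log_{1+eps} k candidate splits,
    # at the cost of an eps-dependent loss in the guarantee.
    t, cand = 1.0, []
    while t < k:
        cand.append(int(t))
        t = max(int(t) + 1, t * (1.0 + eps))
    geo_t = max(cand, key=lambda t: solve(t, k - t))
    return exact_t, geo_t
```

With `eps=1.0` the candidate list is the doubling sequence $1, 2, 4, \ldots$ from the commented-out remark, which loses at most a factor of 2 relative to the exhaustive scan.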
