author     Thibaut Horel <thibaut.horel@gmail.com>   2015-03-05 12:51:43 -0500
committer  Thibaut Horel <thibaut.horel@gmail.com>   2015-03-05 12:51:43 -0500
commit     96f5a11aec4e9a6c94eedc3167ac42b4f62c6f93 (patch)
tree       07c7d92b1cfdd8e54057cc6dda419bce357e8c25 /paper/sections
parent     26e0b3c4128c6d8c215ee894b1f447f1984d55f0 (diff)
download   fast-seeding-96f5a11aec4e9a6c94eedc3167ac42b4f62c6f93.tar.gz
Revert "ArXiv version"
This reverts commit 26e0b3c4128c6d8c215ee894b1f447f1984d55f0.
Diffstat (limited to 'paper/sections')
 -rw-r--r--  paper/sections/adaptivity.tex  | 10
 -rw-r--r--  paper/sections/algorithms.tex  | 13
 -rw-r--r--  paper/sections/experiments.tex |  1
 -rw-r--r--  paper/sections/model.tex       |  2
4 files changed, 14 insertions(+), 12 deletions(-)
diff --git a/paper/sections/adaptivity.tex b/paper/sections/adaptivity.tex
index 8e2e52f..d590408 100644
--- a/paper/sections/adaptivity.tex
+++ b/paper/sections/adaptivity.tex
@@ -149,10 +149,10 @@ adaptive policy is upper bounded by the optimal non-adaptive policy:
 % \end{displaymath}
 %\end{proposition}
-The proof of this proposition can be found in Appendix~\ref{sec:ad-proofs} and
-relies on the following fact: the optimal adaptive policy can be written as
-a feasible non-adaptive policy, hence it provides a lower bound on the value of
-the optimal non-adaptive policy.
+The proof of this proposition can be found in the full version of the
+paper~\cite{full} and relies on the following fact: the optimal adaptive policy
+can be written as a feasible non-adaptive policy, hence it provides a lower
+bound on the value of the optimal non-adaptive policy.
 
 \subsection{From Non-Adaptive to Adaptive Solutions}\label{sec:round}
 
@@ -212,7 +212,7 @@ $S$ that we denote by $A(S)$.
 More precisely, denoting by $\text{OPT}_A$ the
 optimal value of the adaptive problem~\eqref{eq:problem}, we have the following
 proposition whose proof can
-be found in Appendix~\ref{sec:ad-proofs}.
+be found in the full version of this paper~\cite{full}.
 \begin{proposition}\label{prop:cr}
   Let $(S,\textbf{q})$ be an $\alpha$-approximate solution to the non-adaptive
   problem \eqref{eq:relaxed}, then $\mathrm{A}(S) \geq \alpha
diff --git a/paper/sections/algorithms.tex b/paper/sections/algorithms.tex
index 7dd22b5..810653f 100644
--- a/paper/sections/algorithms.tex
+++ b/paper/sections/algorithms.tex
@@ -49,7 +49,7 @@ An optimal solution to the above problem can be found in polynomial time using
 standard LP-solvers. The solution returned by the LP is \emph{fractional}, and
 requires a rounding procedure to return a feasible solution to our problem,
 where $S$ is integral. To round the solution we use the pipage rounding
-method~\cite{pipage}. We defer the details to Appendix~\ref{sec:lp-proofs}.
+method~\cite{pipage}. We defer the details to the full version of the paper~\cite{full}.
 \begin{lemma}
   For \mbox{\textsc{AdaptiveSeeding-LP}} defined in \eqref{eq:lp}, any fractional solution $(\boldsymbol\lambda, \mathbf{q})\in[0,1]^m\times[0,1]^n$
   can be rounded to an integral solution $\bar{\boldsymbol\lambda} \in \{0,1\}^{m}$ s.t. $(1-1/e) F(\mathbf{p}\circ\mathbf{q}) \leq A(\bar{\lambda})$ in $O(m + n)$ steps.
@@ -109,7 +109,7 @@ $\mathcal{O}(T,b)$.
   non-decreasing in $b$.
 \end{lemma}
-The proof of this lemma can be found in Appendix~\ref{sec:comb-proofs}. The main
+The proof of this lemma can be found in the full version of the paper~\cite{full}. The main
 idea consists in writing:
 \begin{multline*}
 \mathcal{O}(T\cup\{x\},c)-\mathcal{O}(T\cup\{x\},b)=\int_b^c\partial_+\mathcal{O}_{T\cup\{x\}}(t)dt
@@ -137,10 +137,10 @@ y\in\neigh{X}\setminus T$, we need to show that:
   This can be done by partitioning the set $T$ into ``high value items'' (those
   with weight greater than $w_x$) and ``low value items'' and carefully
   applying Lemma~\ref{lemma:nd} to the associated subproblems.
-  The proof is in Appendix~\ref{sec:comb-proofs}.
+  The proof is in the full version of the paper~\cite{full}.
 
 Finally, Lemma~\ref{lemma:sub} can be used to show Proposition~\ref{prop:sub}
-whose proof can be found in Appendix~\ref{sec:comb-proofs}.
+whose proof can be found in the full version~\cite{full}.
 
 \begin{proposition}\label{prop:sub}
   Let $b\in\mathbf{R}^+$, then $\mathcal{O}(\neigh{S},b)$ is monotone and
@@ -195,7 +195,8 @@ solution for the split with the highest value (breaking ties arbitrarily).
 This process can be trivially parallelized across $k-1$ machines, each
 performing a computation of a single split.
 With slightly more effort, for any $\epsilon>0$ one can parallelize over $\log_{1+\epsilon}n$ machines at the cost
-of losing a factor of $\epsilon$ in the approximation guarantee (see Appendix~\ref{sec:para} for details).\newline
+of losing a factor of $\epsilon$ in the approximation guarantee (see full
+version of the paper~\cite{full} for details).\newline
 
 \noindent \textbf{Implementation in MapReduce.} While the previous
 paragraph describes how to parallelize the outer \texttt{for} loop of
@@ -205,7 +206,7 @@ applied to the function $\mathcal{O}\left(\neigh{\cdot}, t\right)$. The
 \textsc{Sample\&Prune} approach successfully applied in \cite{mr} to obtain
 MapReduce algorithms for various submodular maximizations can also be applied
 to Algorithm~\ref{alg:comb} to cast it in the MapReduce framework. The details
-of the algorithm can be found in Appendix~\ref{sec:mr}.
+of the algorithm can be found in the full version of the paper~\cite{full}.
 \newline
 %A slightly more sophisticated approach is to consider only $\log n$ splits: $(1,k-1),(2,k-2),\ldots,(2^{\lfloor \log n \rfloor},1)$ and then select the best solution from this set. It is not hard to see that in comparison to the previous approach, this would reduce the approximation guarantee by a factor of at most 2: if the optimal solution is obtained by spending $t$ on the first stage and $k-t$ in the second stage, then since $t \leq 2\cdot2^{\lfloor \log t \rfloor}$ the solution computed for $(2^{\lfloor \log t \rfloor}, k - 2^{\lfloor \log t \rfloor})$ will have at least half that value.
diff --git a/paper/sections/experiments.tex b/paper/sections/experiments.tex
index 650e41d..2faa011 100644
--- a/paper/sections/experiments.tex
+++ b/paper/sections/experiments.tex
@@ -67,6 +67,7 @@ constitute our core set.
 We then crawled the social network of these sets: for each user, we collected
 her list of friends, and the degrees (number of friends) of these friends.\newline
+
 \noindent\textbf{Data description.} Among the several verticals we collected,
 we select eight of them for which we will report our results. We obtained
 similar results for the other ones. Table~\ref{tab:data} summarizes statistics
diff --git a/paper/sections/model.tex b/paper/sections/model.tex
index 5d64acf..ee24eec 100644
--- a/paper/sections/model.tex
+++ b/paper/sections/model.tex
@@ -77,6 +77,6 @@ the transition matrix of the Markov chain.
 This observation led to the development of fast algorithms for influence
 maximization under the voter model~\cite{even-dar}.\newline
 
-\noindent \textbf{\textsc{NP}-Hardness.} In contrast to standard influence maximization, adaptive seeding is already \textsc{NP}-Hard even for the simplest influence functions such as $f(S) = |S|$ and when all probabilities are one. We discuss this in Appendix~\ref{sec:alg-proofs}.
+\noindent \textbf{\textsc{NP}-Hardness.} In contrast to standard influence maximization, adaptive seeding is already \textsc{NP}-Hard even for the simplest influence functions such as $f(S) = |S|$ and when all probabilities are one. We discuss this in the full version of the paper~\cite{full}.
 %In the case when $f(S)=|S|$ and all probabilities equal one, the decision problem is whether given a budget $k$ and target value $\ell$ there exists a subset of $X$ of size $k-t$ which yields a solution with expected value of $\ell$ using $t$ nodes in $\mathcal{N}(X)$. It is easy to see that this problem is \textsc{NP}-hard by reduction from \textsc{Set-Cover}.
 %This is equivalent to deciding whether there are $k-t$ nodes in $X$ that have $t$ neighbors in $\mathcal{N}(X)$.
 %To see this is \textsc{NP}-hard, consider reducing from \textsc{Set-Cover} where there is one node $i$ for each input set $T_i$, $1\leq i\leq n$, with $\neigh{i}= T_i$ and integers $k,\ell$, and the output is ``yes'' if there is a family of $k$ sets in the input which cover at least $\ell$ elements, and ``no'' otherwise.
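The Set-Cover reduction in the model.tex hunk above can be illustrated with a small Python sketch. This is hypothetical illustration code, not part of the paper's implementation: each input set $T_i$ becomes a node $i$ with neighborhood $N(i) = T_i$, and with $f(S) = |S|$ and all probabilities equal to one, asking whether $k$ nodes can reach at least $\ell$ neighbors is exactly the Set-Cover decision problem (decided here by brute force, only to make the correspondence concrete).

```python
from itertools import combinations

def adaptive_seeding_instance(sets):
    """Hypothetical construction: node i gets neighborhood N(i) = T_i."""
    return {i: frozenset(t) for i, t in enumerate(sets)}

def covers(sets, k, ell):
    """Brute-force decision: do some k input sets cover >= ell elements?

    Equivalently, in the constructed instance: can k nodes of X jointly
    reach at least ell neighbors?
    """
    neigh = adaptive_seeding_instance(sets)
    return any(
        len(frozenset().union(*(neigh[i] for i in combo))) >= ell
        for combo in combinations(neigh, k)
    )
```

For example, with sets `{1,2}, {2,3}, {3,4}` the pair `{1,2}, {3,4}` covers four elements, so `covers([{1,2},{2,3},{3,4}], 2, 4)` is true, while covering five elements with two sets is impossible.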
