Diffstat (limited to 'paper/sections/lowerbound.tex')
-rw-r--r--	paper/sections/lowerbound.tex	47
1 files changed, 28 insertions, 19 deletions
diff --git a/paper/sections/lowerbound.tex b/paper/sections/lowerbound.tex
index bbd8566..fdee069 100644
--- a/paper/sections/lowerbound.tex
+++ b/paper/sections/lowerbound.tex
@@ -1,10 +1,12 @@
-In this section, we will consider the stable sparse recovery of section XXX.
-Our goal is to obtain an information-theoretic lower bound on the number of
-measurements necessary to approximately recover the parameter $\theta$ of
-a cascade model from observed cascades. Similar lower bounds were obtained for
-sparse linear inverse problems in XXXs.
+In this section, we consider the stable sparse recovery setting of
+Section~\ref{sec:relaxing}. Our goal is to obtain an information-theoretic
+lower bound on the number of measurements necessary to approximately recover
+the parameter $\theta$ of a cascade model from observed cascades. Similar
+lower bounds were obtained for sparse linear inverse problems in
+\cite{pw11, pw12, bipw11}.
 
 \begin{theorem}
+  \label{thm:lb}
   Let us consider a cascade model of the form XXX and a recovery algorithm
   $\mathcal{A}$ which takes as input $n$ random cascade measurements and
   outputs $\hat{\theta}$ such that with probability $\delta<\frac{1}{2}$
@@ -17,23 +19,27 @@ sparse linear inverse problems in XXXs.
 = \Omega(s\log\frac{m}{s}/\log C)$.
 \end{theorem}
 
-This theorem should be contrasted with Corollary XXX: up to an additive
+This theorem should be contrasted with Corollary~\ref{cor:relaxing}: up to an additive
 $s\log s$ factor, the number of measurements required by our algorithm is
 tight.
 
-The proof of Theorem XXX follows an approach similar to XXX. Let us consider an
-algorithm $\mathcal{A}$ as in the theorem. Intuitively, $\mathcal{A}$ allows
-Alice and Bob to send $\Omega(s\log\frac{m}{s})$ quantity of information over
-a Gaussian channel. By the Shannon-Hartley theorem, this quantity of
-information is $O(n\log C)$. These two bounds together give the theorem.
+The proof of Theorem~\ref{thm:lb} follows an approach similar to \cite{pw12}.
+Let us consider an algorithm $\mathcal{A}$ as in the theorem. Intuitively,
+$\mathcal{A}$ allows Alice and Bob to send $\Omega(s\log\frac{m}{s})$ bits of
+information over a Gaussian channel. By the Shannon-Hartley theorem, this
+amount of information is $O(n\log C)$. These two bounds together give the
+theorem.
 
-Formally, let $\mathcal{F}$ be a family such that:
+Formally, let $\mathcal{F}\subset \{S\subset [1..m]\,|\, |S|=s\}$ be a family
+of $s$-sparse supports such that:
 \begin{itemize}
-  \item
-  \item
-  \item
+  \item $|S\Delta S'|\geq s$ for all $S\neq S'\in\mathcal{F}$,
+  \item $\P[i\in S]=\frac{s}{m}$ for all $i\in[1..m]$ when $S$ is chosen
+  uniformly at random in $\mathcal{F}$,
+  \item $\log|\mathcal{F}| = \Omega(s\log\frac{m}{s})$.
 \end{itemize}
-and let $X = \big\{t\in\{-1,0,1\}^m\,|\, \mathrm{supp}(t)\in\mathcal{F}\big\}$.
+Such a family can be obtained from a linear code (see \cite{pw11} for
+details). Let $X = \big\{t\in\{-1,0,1\}^m\,|\, \mathrm{supp}(t)\in\mathcal{F}\big\}$.
 
 Consider the following communication game between Alice and Bob.
 \begin{itemize}
@@ -50,15 +56,18 @@ Consider the following communication game between Alice and Bob.
   of $\hat{\theta}$ in $X$.
 \end{itemize}
 
-We have the following from XXX and XXX respectively.
+We have the following two lemmas from \cite{pw11} and \cite{pw12} respectively.
 
 \begin{lemma}
+  \label{lemma:upperinf}
   Let $S'=\mathrm{supp}(\hat{t})$. If $\alpha = \Omega(\frac{1}{C})$,
   $I(S, S') = O(n\log C)$.
 \end{lemma}
 
 \begin{lemma}
-  If $\alpha = \Omega(\frac{1}{C})$, $I(S, S') = \Omega(s\log\frac{m}{s})$
+  \label{lemma:lowerinf}
+  If $\alpha = \Omega(\frac{1}{C})$, $I(S, S') = \Omega(s\log\frac{m}{s})$.
 \end{lemma}
 
-Lemmas XXX and XXX together give Theorem XXX.
+Lemmas \ref{lemma:lowerinf} and \ref{lemma:upperinf} together give Theorem
+\ref{thm:lb}.
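
Taken together, the two lemmas in this patch sandwich the mutual information $I(S, S')$ between Alice's chosen support $S$ and Bob's decoded support $S'$, and Theorem~\ref{thm:lb} follows by chaining them and solving for $n$. A minimal worked version of that last step, in the notation of the patch (constants suppressed):

% How the two information bounds combine into Theorem~\ref{thm:lb}.
\begin{align*}
  s\log\frac{m}{s}
    &= O\big(I(S, S')\big)  && \text{(Lemma~\ref{lemma:lowerinf})}\\
    &= O(n\log C)           && \text{(Lemma~\ref{lemma:upperinf})}
\end{align*}
% Solving for $n$ gives $n = \Omega\big(s\log\frac{m}{s}/\log C\big)$,
% the bound stated in Theorem~\ref{thm:lb}.

The second line is where Shannon-Hartley enters: each of the $n$ measurements is one use of a Gaussian channel, carrying at most $\frac{1}{2}\log(1+\mathrm{SNR})$ bits, and at the noise level $\alpha = \Omega(\frac{1}{C})$ used in the game this comes to $O(\log C)$ bits per measurement.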

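The patch attributes the existence of the family $\mathcal{F}$ to a linear code and defers the details to \cite{pw11}. The sketch below illustrates one standard construction of this kind; it is not the authors' text, and the block partition, field size $q$, and distance $s/2$ are assumptions chosen so that the three bullet points hold.

% Illustrative construction of $\mathcal{F}$ (sketch in the spirit of
% \cite{pw11}; the parameters are assumptions, not from the patch).
% Partition $[1..m]$ into $s$ blocks of size $q = m/s$, and identify a
% support taking one coordinate per block with a word in $\mathbb{F}_q^s$.
% Pick a linear code $\mathcal{C}\subseteq\mathbb{F}_q^s$ with minimum
% distance $\frac{s}{2}$ and no identically zero coordinate. Then:
%  - two codewords differ in at least $s/2$ blocks, so the corresponding
%    supports satisfy $|S\Delta S'| \geq 2\cdot\frac{s}{2} = s$;
%  - each coordinate of a uniformly random codeword is uniform over its
%    block, so $\P[i\in S] = \frac{1}{q} = \frac{s}{m}$;
%  - the Gilbert--Varshamov bound supplies such a code with
%    $\log|\mathcal{C}| = \Omega(s\log q) = \Omega(s\log\frac{m}{s})$.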