author     Thibaut Horel <thibaut.horel@gmail.com>  2015-02-01 09:18:22 -0500
committer  Thibaut Horel <thibaut.horel@gmail.com>  2015-02-01 09:18:22 -0500
commit     6446b4bc986be1e4ff4bd1914efc4a2e6d0b7d12 (patch)
tree       87079e0a278d59937acce2755508d30ebda194fc
parent     220d23bc64be0254bcbcb9adc4a26ef511d5543c (diff)
download   cascades-6446b4bc986be1e4ff4bd1914efc4a2e6d0b7d12.tar.gz

Lower bound: beginning

-rw-r--r--  paper/paper.tex                |  2
-rw-r--r--  paper/sections/lowerbound.tex  | 64
2 files changed, 66 insertions, 0 deletions
diff --git a/paper/paper.tex b/paper/paper.tex
index 930e8ba..1e3437e 100644
--- a/paper/paper.tex
+++ b/paper/paper.tex
@@ -49,6 +49,7 @@
\usepackage{amsmath, amsfonts, amssymb, amsthm}
\newcommand{\reals}{\mathbb{R}}
\newcommand{\ints}{\mathbb{N}}
+\renewcommand{\O}{\mathcal{O}}
\DeclareMathOperator{\E}{\mathbb{E}}
\let\P\relax
\DeclareMathOperator{\P}{\mathbb{P}}
@@ -111,6 +112,7 @@ Summarize contributions:
\section{A lower bound}
\label{sec:lowerbound}
+\input{sections/lowerbound.tex}
\section{Assumptions}
\label{sec:assumptions}
diff --git a/paper/sections/lowerbound.tex b/paper/sections/lowerbound.tex
new file mode 100644
index 0000000..bbd8566
--- /dev/null
+++ b/paper/sections/lowerbound.tex
@@ -0,0 +1,64 @@
+In this section, we consider the stable sparse recovery setting of
+Section XXX. Our goal is to obtain an information-theoretic lower bound on
+the number of measurements necessary to approximately recover the parameter
+$\theta$ of a cascade model from observed cascades. Similar lower bounds
+were obtained for sparse linear inverse problems in XXX.
+
+\begin{theorem}
+ Let us consider a cascade model of the form XXX and a recovery algorithm
+ $\mathcal{A}$ which takes as input $n$ random cascade measurements and
+ outputs $\hat{\theta}$ such that with probability $1-\delta$, for some
+ $\delta<\frac{1}{2}$ (over the measurements):
+ \begin{displaymath}
+ \|\hat{\theta}-\theta^*\|_2\leq
+ C\min_{\|\theta\|_0\leq s}\|\theta-\theta^*\|_2
+ \end{displaymath}
+ where $\theta^*$ is the true parameter of the cascade model. Then $n
+ = \Omega(s\log\frac{m}{s}/\log C)$.
+\end{theorem}
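+
+For instance, when $C$ is a fixed constant greater than $1$, the theorem
+yields $n = \Omega(s\log\frac{m}{s})$, which is of the same form as the
+lower bounds for sparse linear inverse problems mentioned above.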
+
+This theorem should be contrasted with Corollary XXX: up to an additive
+$s\log s$ term, the number of measurements required by our algorithm is
+tight.
+
+The proof of Theorem XXX follows an approach similar to XXX. Let us consider
+an algorithm $\mathcal{A}$ as in the theorem. Intuitively, $\mathcal{A}$
+allows Alice to transmit $\Omega(s\log\frac{m}{s})$ bits of information to
+Bob over a Gaussian channel. By the Shannon--Hartley theorem, such a channel
+can only carry $\O(n\log C)$ bits over $n$ uses. These two bounds together
+give the theorem.
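+
+Heuristically, the $\O(n\log C)$ bound can be seen as follows (using the
+notation introduced below, and assuming, as in comparable constructions,
+that every support in $\mathcal{F}$ has size $s$). The signal $t$ has
+power $\|t\|_2^2 = s$, while the noise $w$ has expected power
+$\E\|w\|_2^2 = \alpha s$, so the $n$ measurements amount to $n$ uses of
+a Gaussian channel with signal-to-noise ratio $1/\alpha = \O(C)$. The
+Shannon--Hartley theorem then bounds the information they carry:
+\begin{displaymath}
+    I(S, S') \leq \frac{n}{2}\log\Big(1 + \frac{1}{\alpha}\Big)
+    = \O(n\log C).
+\end{displaymath}
+The formal statement is Lemma XXX below.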
+
+Formally, let $\mathcal{F}$ be a family such that:
+\begin{itemize}
+ \item
+ \item
+ \item
+\end{itemize}
+and let $X = \big\{t\in\{-1,0,1\}^m\,|\, \mathrm{supp}(t)\in\mathcal{F}\big\}$.
+
+Consider the following communication game between Alice and Bob.
+\begin{itemize}
+    \item Alice chooses $S$ uniformly at random from $\mathcal{F}$ and
+        $t$ uniformly at random from $X$ subject to $\mathrm{supp}(t) = S$.
+    \item Let $w\sim\mathcal{N}(0, \alpha \frac{s}{m}I_m)$ and $\theta
+        = t + w$. Since for every $\theta$, $\mathcal{A}$ recovers $\theta$
+        with probability $1-\delta$ over the measurements, an averaging
+        argument shows that there exists a fixed set of measurements
+        $x_1,\ldots, x_n$ such that with probability $1-\delta$ over
+        $\theta$, $\mathcal{A}$ recovers $\theta$. Alice sends those
+        measurements to Bob.
+    \item Bob uses $\mathcal{A}$ to recover $\hat{\theta}$ from the
+        measurements, then computes $\hat{t}$, the best $\ell_2$
+        approximation of $\hat{\theta}$ in $X$.
+\end{itemize}
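+
+To see why Bob's decoding step is useful, note that $t$ itself is feasible
+in the minimum of the recovery guarantee (again assuming supports of size
+$s$), so that
+\begin{displaymath}
+    \|\hat{\theta}-\theta\|_2 \leq C\min_{\|\theta'\|_0\leq s}
+    \|\theta'-\theta\|_2 \leq C\|t-\theta\|_2 = C\|w\|_2,
+\end{displaymath}
+and by the triangle inequality $\|\hat{\theta}-t\|_2\leq (1+C)\|w\|_2$,
+where $\E\|w\|_2^2 = \alpha s$. The elided properties of $\mathcal{F}$ then
+ensure that $\hat{t}$, and hence $S'$, retains information about $S$; this
+is made precise in Lemma XXX below.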
+
+We have the following two lemmas, from XXX and XXX respectively.
+
+\begin{lemma}
+ Let $S'=\mathrm{supp}(\hat{t})$. If $\alpha = \Omega(\frac{1}{C})$, then
+ $I(S, S') = \O(n\log C)$.
+\end{lemma}
+
+\begin{lemma}
+ If $\alpha = \Omega(\frac{1}{C})$, then $I(S, S') = \Omega(s\log\frac{m}{s})$.
+\end{lemma}
+
+Lemmas XXX and XXX together give Theorem XXX.
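+
+Indeed, choosing $\alpha = \Theta(\frac{1}{C})$ so that both lemmas apply,
+we obtain
+\begin{displaymath}
+    s\log\frac{m}{s} = \O\big(I(S, S')\big) = \O(n\log C),
+\end{displaymath}
+which rearranges to $n = \Omega(s\log\frac{m}{s}/\log C)$, as claimed.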