| field | value | date |
|---|---|---|
| author | jeanpouget-abadie <jean.pougetabadie@gmail.com> | 2015-05-12 19:46:27 +0200 |
| committer | jeanpouget-abadie <jean.pougetabadie@gmail.com> | 2015-05-12 19:46:27 +0200 |
| commit | e794a29f1d777e65f778aaf2ff9c764ce5155f3f (patch) | |
| tree | 1e8584a4285a5b44066e610ec106ee165630a779 /paper/sections/appendix.tex | |
| parent | 4110319283b3b5b667d215bd44f23f8cd1a7cf46 (diff) | |
| download | cascades-e794a29f1d777e65f778aaf2ff9c764ce5155f3f.tar.gz | |
more cosmetic changes
Diffstat (limited to 'paper/sections/appendix.tex')
| mode | file | lines |
|---|---|---|
| -rw-r--r-- | paper/sections/appendix.tex | 25 |
1 files changed, 25 insertions, 0 deletions
diff --git a/paper/sections/appendix.tex b/paper/sections/appendix.tex
index 79baf2f..64953a3 100644
--- a/paper/sections/appendix.tex
+++ b/paper/sections/appendix.tex
@@ -74,6 +74,31 @@
 the proof. Proposition~\ref{prop:fi} follows.
 \end{proof}
+\subsection{Lower Bound}
+
+Let us consider an algorithm $\mathcal{A}$ which satisfies the recovery
+guarantee of Theorem~\ref{thm:lb}: there exists a probability distribution over
+measurements such that for all vectors $\theta^*$, \eqref{eq:lb} holds w.p.
+$\delta$. By the probabilistic method, this implies that for every distribution
+$D$ over vectors $\theta$, there exists an $n\times m$ measurement matrix $X_D$
+such that \eqref{eq:lb} holds w.p. $\delta$ ($\theta$ is now the random
+variable).
+
+Consider the following distribution $D$: choose $S$ uniformly at random from a
+``well-chosen'' set of $s$-sparse supports $\mathcal{F}$ and $t$ uniformly at
+random from
+$X \defeq \big\{t\in\{-1,0,1\}^m\,|\,\mathrm{supp}(t)\in\mathcal{F}\big\}$.
+Define $\theta = t + w$, where
+$w\sim\mathcal{N}(0, \alpha\frac{s}{m}I_m)$ and $\alpha = \Omega(\frac{1}{C})$.
+
+Consider the following communication game between Alice and Bob: \emph{(1)}
+Alice sends $y\in\reals^m$, drawn from a Bernoulli distribution with parameter
+$f(X_D\theta)$, to Bob. \emph{(2)} Bob uses $\mathcal{A}$ to recover
+$\hat{\theta}$ from $y$. It can be shown that at the end of the game Bob
+holds $\Omega(s\log\frac{m}{s})$ bits of information about $S$. By the
+Shannon-Hartley theorem, this information is also upper bounded by $O(n\log
+C)$. These two bounds together imply the theorem.
+
 \subsection{Other continuous time processes binned to ours: proportional
 hazards model}
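
A minimal sketch of how the two bounds at the end of this hunk combine, for readers following the argument. It is not part of the commit: the precise statement of Theorem~\ref{thm:lb} is not shown on this page, and the use of the data processing inequality, the per-measurement capacity of $O(\log C)$ (suggested by the choice $\alpha = \Omega(\frac{1}{C})$), and the final form of the bound are assumptions.

```latex
% Editorial sketch, not part of the committed appendix.tex.
% Assumptions: Bob's estimate is a function of Alice's message y (data
% processing), each of the n measurements carries O(log C) bits
% (Shannon-Hartley with SNR = O(C)), and Theorem~\ref{thm:lb} lower-bounds
% the number of measurements n.
\begin{align*}
  I(S;\hat{\theta}) &= \Omega\!\Big(s\log\frac{m}{s}\Big)
    && \text{Bob's information about $S$ after the game}\\
  I(S;\hat{\theta}) &\le I(S;y) = O(n\log C)
    && \text{data processing + Shannon--Hartley}\\
  \text{hence}\quad n &= \Omega\!\left(\frac{s\log(m/s)}{\log C}\right).
\end{align*}
```

The last line is presumably the shape Theorem~\ref{thm:lb} takes; treat it as an assumption until checked against the theorem statement in the paper.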
