From e794a29f1d777e65f778aaf2ff9c764ce5155f3f Mon Sep 17 00:00:00 2001
From: jeanpouget-abadie
Date: Tue, 12 May 2015 19:46:27 +0200
Subject: more cosmetic changes

---
 paper/sections/appendix.tex | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

(limited to 'paper/sections/appendix.tex')

diff --git a/paper/sections/appendix.tex b/paper/sections/appendix.tex
index 79baf2f..64953a3 100644
--- a/paper/sections/appendix.tex
+++ b/paper/sections/appendix.tex
@@ -74,6 +74,31 @@ the proof. Proposition~\ref{prop:fi} follows.
 \end{proof}
 
+\subsection{Lower Bound}
+
+Let us consider an algorithm $\mathcal{A}$ that satisfies the recovery
+guarantee of Theorem~\ref{thm:lb}: there exists a probability distribution over
+measurements such that for all vectors $\theta^*$, \eqref{eq:lb} holds w.p.
+$\delta$. By the probabilistic method, this implies that for every distribution
+$D$ over vectors $\theta$, there exists an $n\times m$ measurement matrix $X_D$
+such that \eqref{eq:lb} holds w.p. $\delta$ ($\theta$ is now the random
+variable).
+
+Consider the following distribution $D$: choose $S$ uniformly at random from a
+``well-chosen'' set of $s$-sparse supports $\mathcal{F}$, and $t$ uniformly at
+random from
+$X \defeq \big\{t\in\{-1,0,1\}^m\,|\, \mathrm{supp}(t)\in\mathcal{F}\big\}$.
+Define $\theta = t + w$, where
+$w\sim\mathcal{N}(0, \alpha\frac{s}{m}I_m)$ and $\alpha = \Omega(\frac{1}{C})$.
+
+Consider the following communication game between Alice and Bob: \emph{(1)}
+Alice sends Bob $y\in\reals^m$ drawn from a Bernoulli distribution with parameter
+$f(X_D\theta)$. \emph{(2)} Bob uses $\mathcal{A}$ to recover
+$\hat{\theta}$ from $y$. It can be shown that at the end of the game, Bob
+holds $\Omega(s\log \frac{m}{s})$ bits of information about $S$. By the
+Shannon--Hartley theorem, this information is also upper bounded by $O(n\log
+C)$. These two bounds together imply the theorem.
+
 \subsection{Other continuous time processes binned to ours: proportional hazards model}
-- 
cgit v1.2.3-70-g09d2
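The patch ends with "these two bounds together imply the theorem" without spelling out the combination. A minimal LaTeX sketch of that last step is given below; since Theorem~\ref{thm:lb} itself is not restated in this hunk, the exact form of the resulting bound on $n$ is an assumption inferred from the two stated information bounds:

```latex
% Sketch (assumed conclusion; Theorem~\ref{thm:lb} is not restated here).
% Bob's information about the support S is squeezed between two bounds:
%   lower bound (recovery succeeds):   \Omega(s \log(m/s))
%   upper bound (Shannon--Hartley):    O(n \log C)
\begin{align*}
  s \log\tfrac{m}{s} = O(n \log C)
  \quad\Longrightarrow\quad
  n = \Omega\!\left(\frac{s \log(m/s)}{\log C}\right).
\end{align*}
```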