From 07a4bc620e7056e8ef0e21102794d4cb80044f82 Mon Sep 17 00:00:00 2001
From: Thibaut Horel
Date: Fri, 6 Feb 2015 15:24:51 -0500
Subject: Compression

---
 paper/sections/lowerbound.tex | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

(limited to 'paper/sections')

diff --git a/paper/sections/lowerbound.tex b/paper/sections/lowerbound.tex
index 5c35446..ed3600f 100644
--- a/paper/sections/lowerbound.tex
+++ b/paper/sections/lowerbound.tex
@@ -44,11 +44,8 @@ $\mathcal{F}$ and $t$ uniformly at random from X
 $\theta = t + w$ where $w\sim\mathcal{N}(0, \alpha\frac{s}{m}I_m)$ and
 $\alpha = \Omega(\frac{1}{C})$.
 
-Consider the following communication game between Alice and Bob:
-\begin{itemize}
-  \item Alice sends $y\in\reals^m$ drawn from a Bernouilli distribution of parameter
-  $f(X_D\theta)$ to Bob.
-  \item Bob uses $\mathcal{A}$ to recover $\hat{\theta}$ from $y$.
+Consider the following communication game between Alice and Bob: \emph{(1)} Alice sends $y\in\reals^m$ drawn from a Bernoulli distribution of parameter
+$f(X_D\theta)$ to Bob. \emph{(2)} Bob uses $\mathcal{A}$ to recover $\hat{\theta}$ from $y$.
 \end{itemize}
 It can be shown that at the end of the game Bob now has a quantity of
 information $\Omega(s\log \frac{m}{s})$ about $S$. By the Shannon-Hartley
--
cgit v1.2.3-70-g09d2
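
The paragraph touched by this hunk asserts that Bob ends up with $\Omega(s\log \frac{m}{s})$ bits of information about $S$. As a rough sketch of where a count of that form typically comes from (not part of the commit above, and assuming $S$ is a support of size $s$ drawn uniformly at random from $\{1,\dots,m\}$; its definition is not visible in this hunk):
\[
  H(S) \;=\; \log_2 \binom{m}{s}
        \;\ge\; \log_2 \left(\frac{m}{s}\right)^{\!s}
        \;=\; s \log_2 \frac{m}{s}
        \;=\; \Omega\!\left(s \log \frac{m}{s}\right),
\]
using the standard bound $\binom{m}{s} \ge (m/s)^s$. If Bob can recover $S$ from $y$ (here, by running $\mathcal{A}$), he must have acquired on the order of $H(S)$ bits of information about it. The truncated last sentence of the hunk suggests the argument then invokes the Shannon-Hartley theorem to bound the information the noisy observations can carry, but that step lies outside the lines shown here.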