author	Thibaut Horel <thibaut.horel@gmail.com>	2015-02-06 15:25:41 -0500
committer	Thibaut Horel <thibaut.horel@gmail.com>	2015-02-06 15:25:41 -0500
commit	6dfc20ac64ca52c438d4a311377631e3ebd603ed (patch)
tree	f915f206d14c87a900db387d05b4f034bc71245c
parent	07a4bc620e7056e8ef0e21102794d4cb80044f82 (diff)
download	cascades-6dfc20ac64ca52c438d4a311377631e3ebd603ed.tar.gz
Fix bug itemize
-rw-r--r--	paper/sections/lowerbound.tex	1
1 file changed, 0 insertions, 1 deletion
diff --git a/paper/sections/lowerbound.tex b/paper/sections/lowerbound.tex
index ed3600f..becd13f 100644
--- a/paper/sections/lowerbound.tex
+++ b/paper/sections/lowerbound.tex
@@ -46,7 +46,6 @@ $\theta = t + w$ where $w\sim\mathcal{N}(0, \alpha\frac{s}{m}I_m)$ and $\alpha
Consider the following communication game between Alice and Bob: \emph{(1)} Alice sends $y\in\reals^m$, drawn from a Bernoulli distribution with parameter
$f(X_D\theta)$, to Bob. \emph{(2)} Bob uses $\mathcal{A}$ to recover $\hat{\theta}$ from $y$.
-\end{itemize}
It can be shown that, at the end of the game, Bob has a quantity of
information $\Omega(s\log \frac{m}{s})$ about $S$. By the Shannon--Hartley
theorem, this information is also
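
The hunk ends mid-sentence, so the rest of the bound is not shown in this commit. The following is a hedged sketch, not taken from the commit, of the standard way the Shannon--Hartley theorem closes this kind of argument, assuming the relevant channel is the additive Gaussian one with noise $w\sim\mathcal{N}(0, \alpha\frac{s}{m}I_m)$ from the hunk context and writing $P$ for an assumed per-coordinate signal power:

% Hedged sketch (assumptions as stated above; not part of the commit):
% m uses of an additive Gaussian channel with noise power N = \alpha s/m
% and signal power P convey at most (m/2) log(1 + P/N) information
% (Shannon--Hartley), so the two bounds on I(S; y) can be combined.
\begin{align*}
  \Omega\!\left(s\log\frac{m}{s}\right)
  \;\le\; I(S; y)
  \;\le\; \frac{m}{2}\log\!\left(1 + \frac{P}{N}\right),
  \qquad N = \alpha\frac{s}{m}.
\end{align*}

Under these assumptions, rearranging for $m$ would give a lower bound on the number of observations any such algorithm $\mathcal{A}$ requires; the commit itself does not show that step.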