path: root/paper/sections
author     Thibaut Horel <thibaut.horel@gmail.com>  2015-04-14 15:27:24 -0400
committer  Thibaut Horel <thibaut.horel@gmail.com>  2015-04-14 15:27:24 -0400
commit     90211ba1847251c552361a7d0b2eef1c540ef72a (patch)
tree       f7768a2299b7813fe6e5e792f960b2fb469eef25 /paper/sections
parent     d8a68c6917f5b6053117e0145f6d4d80a8bec26b (diff)
download   learn-optimize-90211ba1847251c552361a7d0b2eef1c540ef72a.tar.gz
NIPS format and minor tweaks. We are already close to the page limit…
Diffstat (limited to 'paper/sections')
-rw-r--r--  paper/sections/negative.tex  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/paper/sections/negative.tex b/paper/sections/negative.tex
index f88117a..2eaeb47 100644
--- a/paper/sections/negative.tex
+++ b/paper/sections/negative.tex
@@ -25,7 +25,7 @@ guarantees are sufficient for optimization. First look at learning weights in
a cover function. Maybe facility location? Sums of concave over modular are
probably too hard because of the connection to neural networks.
-\subsection{Optimizable $n\Rightarrow$ Learnable}
+\subsection{Optimizable $\nRightarrow$ Learnable}
Recall the matroid construction from~\cite{balcan2011learning}:
\begin{theorem}
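
The one-character change above removes a stray "n" in front of \Rightarrow and uses the single negated-implication arrow \nRightarrow instead, so the subsection heading reads "Optimizable does-not-imply Learnable" rather than a literal "n" followed by a double arrow. The following is a minimal sketch of the corrected heading, not the paper's actual preamble; the assumption that amssymb is loaded (it is the package that defines \nRightarrow) is mine and is not stated in the diff.

% Minimal sketch; assumes amssymb, which provides \nRightarrow.
\documentclass{article}
\usepackage{amssymb}
\begin{document}
% Before the fix: $n\Rightarrow$ typesets a literal n followed by a
% double arrow. After the fix: a single crossed-out double arrow.
\subsection{Optimizable $\nRightarrow$ Learnable}
\end{document}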