Diffstat (limited to 'paper/sections/negative.tex')
-rw-r--r--  paper/sections/negative.tex  2
1 file changed, 1 insertion, 1 deletion
diff --git a/paper/sections/negative.tex b/paper/sections/negative.tex
index f88117a..2eaeb47 100644
--- a/paper/sections/negative.tex
+++ b/paper/sections/negative.tex
@@ -25,7 +25,7 @@ guarantees are sufficient for optimization. First look at learning weights in
 a cover function. Maybe facility location? Sums of concave over modular are
 probably too hard because of the connection to neural networks.
-\subsection{Optimizable $n\Rightarrow$ Learnable}
+\subsection{Optimizable $\nRightarrow$ Learnable}
 Recall the matroid construction from~\cite{balcan2011learning}:
 \begin{theorem}