Diffstat (limited to 'results.tex')
-rw-r--r--  results.tex | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+), 0 deletions(-)
diff --git a/results.tex b/results.tex
index 8735aa3..4968e4b 100644
--- a/results.tex
+++ b/results.tex
@@ -542,6 +542,32 @@ A}\backslash {\cal B}$.
{\bf TODO:} A cleaner, simpler example would be nice.
+\section{Meeting notes: 04.03.2015}
+
+Consider the following function:
+\begin{displaymath}
+ g(S) = \max\left\{\sqrt{n}\,|X\cap S|,\ |S|\right\}
+\end{displaymath}
+where $X$ is a fixed set with $|X|=\sqrt{n}$. Assume that you are given
+polynomially many samples $S$ in which each element is included independently
+with probability $1/2$. Then with high probability all the samples have size
+roughly $n/2$, and you will observe $g(S)\approx|S|$ on every sample.
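+
+Sanity check (a sketch, assuming elements are sampled independently): by
+Chernoff bounds, with high probability every sample $S$ satisfies
+\begin{displaymath}
+  |S| = \frac{n}{2} \pm O\!\left(\sqrt{n\log n}\right)
+  \qquad\mbox{and}\qquad
+  \sqrt{n}\,|X\cap S| = \frac{n}{2} \pm O\!\left(n^{3/4}\sqrt{\log n}\right),
+\end{displaymath}
+so the two arguments of the maximum agree up to lower-order terms and
+$g(S) = (1\pm o(1))\,|S|$, regardless of what $X$ is.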
+
+\paragraph{Claim 1:} $g$ is PAC-learnable, because the hypothesis that outputs
+$|S|$ is correct up to lower-order terms with high probability when $S$ is
+drawn from the same distribution as above.
+
+\paragraph{Claim 2:} $g$ is not optimizable under budget $\sqrt{n}$, because
+the samples reveal nothing about $X$.
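+
+A sketch of why: the optimum under budget $\sqrt{n}$ is
+\begin{displaymath}
+  g(X) = \sqrt{n}\,|X| = \sqrt{n}\cdot\sqrt{n} = n,
+\end{displaymath}
+whereas any $S$ with $|S|=\sqrt{n}$ chosen independently of $X$ (say,
+uniformly at random) has $\mathrm{E}\left[|X\cap S|\right] = |X||S|/n = 1$,
+and hence $\mathrm{E}\left[g(S)\right] \le 2\sqrt{n}$: a gap of order
+$\sqrt{n}$.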
+
+\paragraph{Open Question:} Cook up a similar example where $g$ is submodular
+and where you observe sets of the same size as your budget.
+
+\paragraph{Positive results:} Try to obtain guarantees for learning the
+parameters of parametric submodular functions, and show whether or not these
+guarantees are sufficient for optimization. First look at learning the weights
+in a coverage function; maybe facility location after that. Sums of concave
+over modular functions are probably too hard because of the connection to
+neural networks.
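+
+For concreteness, the parametric families mentioned above (standard
+definitions; the notation is fixed here just for reference): a weighted
+coverage function over a universe $U$ with weights $w_u\ge 0$ and sets
+$A_1,\dots,A_n\subseteq U$ is
+\begin{displaymath}
+  f(S) = \sum_{u \in \bigcup_{i\in S} A_i} w_u,
+\end{displaymath}
+a facility location function is $f(S) = \sum_j \max_{i\in S} w_{ij}$, and a
+sum of concave over modular is $f(S) = \sum_k \varphi_k\bigl(\sum_{i\in S}
+w_{ki}\bigr)$ with each $\varphi_k$ concave and nondecreasing. In each case
+the learning task is to recover the weights from samples.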
\bibliography{optimize} \bibliographystyle{apalike}