From ff6fb0a97f833e9502c3ca19590cb117e4628e80 Mon Sep 17 00:00:00 2001
From: Thibaut Horel
Date: Fri, 3 Apr 2015 17:23:51 -0400
Subject: Add meeting notes

---
 results.tex | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/results.tex b/results.tex
index 8735aa3..4968e4b 100644
--- a/results.tex
+++ b/results.tex
@@ -542,6 +542,32 @@ A}\backslash {\cal B}$.
 
 {\bf TODO:} A cleaner, simpler example would be nice.
 
+\section{Meeting notes: 04.03.2015}
+
+Consider the following function:
+\begin{displaymath}
+  g(S) = \max\left\{\sqrt{n}|X\cap S|, |S|\right\}
+\end{displaymath}
+where $|X|=\sqrt{n}$. Assume that you are given polynomially many samples where
+each element is included independently with probability $1/2$. Then with high
+probability all the samples will have size roughly $n/2$, so you will observe
+$g(S)=|S|$ with high probability.
+
+\paragraph{Claim 1:} $g$ is PAC-learnable because if you output $|S|$ then you
+will be correct with high probability when $S$ is drawn from the same
+distribution as above.
+
+\paragraph{Claim 2:} $g$ is not optimizable under budget $\sqrt{n}$ because you
+never learn anything about $X$ from the samples.
+
+\paragraph{Open question:} Construct a similar example where $g$ is submodular
+and where the observed sets have the same size as the budget.
+
+\paragraph{Positive results:} Try to obtain guarantees for learning the
+parameters of parametric submodular functions, and show whether or not these
+guarantees are sufficient for optimization. First look at learning the weights
+of a cover function. Maybe facility location? Sums of concave over modular are
+probably too hard because of the connection to neural networks.
 \bibliography{optimize}
 \bibliographystyle{apalike}
 
-- 
cgit v1.2.3-70-g09d2
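
As a quick numerical sanity check of Claims 1 and 2 (not part of the committed
notes, and only a sketch under the assumptions stated in them), the following
Python snippet draws samples from the product distribution described above and
compares g(S) with |S|, then compares the value of the hidden set X against a
random set of the same budget. The names n, X, and g mirror the notes; the
concrete value of n and everything else are illustrative choices.

    import random

    n = 250_000                   # ground set size (illustrative)
    sqrt_n = int(n ** 0.5)        # 500, so sqrt_n ** 2 == n exactly
    X = set(range(sqrt_n))        # hidden set with |X| = sqrt(n)

    def g(S):
        # g(S) = max{ sqrt(n) * |X ∩ S|, |S| }, as defined in the notes
        return max(sqrt_n * len(X & S), len(S))

    # Claim 1: on samples where each element is included independently
    # w.p. 1/2, g(S) stays within a (1 + o(1)) factor of |S|, so the
    # constant rule "output |S|" is approximately correct.
    for _ in range(5):
        S = {i for i in range(n) if random.random() < 0.5}
        print(len(S), g(S), round(g(S) / len(S), 3))

    # Claim 2: under budget sqrt(n) the optimum is g(X) = n, but a
    # budget-size set chosen without knowledge of X intersects X in
    # only ~1 element and gets value on the order of sqrt(n).
    R = set(random.sample(range(n), sqrt_n))
    print(g(X), g(R))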