Diffstat (limited to 'paper/sections/introduction.tex')
-rw-r--r--  paper/sections/introduction.tex  54
1 file changed, 54 insertions, 0 deletions
diff --git a/paper/sections/introduction.tex b/paper/sections/introduction.tex
new file mode 100644
index 0000000..6eda59d
--- /dev/null
+++ b/paper/sections/introduction.tex
@@ -0,0 +1,54 @@
+The natural “diminishing returns” property of submodular functions makes them
+suitable for modeling many real-life scenarios. However, even when they are
+a suitable model, it is not realistic to assume that the function is entirely
+known. In many applications, it depends on parameters that are unknown and
+have to be learned from observations.
+
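+Formally, recall that a set function $f \colon 2^V \to \mathbb{R}$ on a finite
+ground set $V$ is said to have diminishing returns (equivalently, to be
+submodular) if
+\[
+  f(A \cup \{e\}) - f(A) \;\geq\; f(B \cup \{e\}) - f(B)
+  \qquad \text{for all } A \subseteq B \subseteq V \text{ and } e \in V \setminus B.
+\]
+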
+A recent line of work has focused on learning submodular functions. The
+PMAC learning framework formalizes what it means to learn a submodular
+function in an “overall” sense, i.e., to approximate its value on most sets
+drawn from a distribution. For specific classes of submodular functions,
+results are also known on how well the parameters of the function can be
+learned from observations.
+
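+Roughly, in the PMAC model the learner is given polynomially many samples
+$(S_i, f(S_i))$ with the $S_i$ drawn independently from a distribution $D$,
+and must output a function $g$ such that, with probability at least
+$1 - \delta$ over the samples,
+\[
+  \Pr_{S \sim D}\left[\, g(S) \leq f(S) \leq \alpha\, g(S) \,\right] \;\geq\; 1 - \varepsilon
+\]
+for some approximation factor $\alpha \geq 1$.
+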
+When the experimenter wishes to optimize the unknown submodular function, the
+well-known approximation algorithms cannot be readily applied: they rely on
+a value query oracle, which is much stronger than what the present scenario
+provides (at best, independent random samples). The results on learning
+submodular functions also seem insufficient, in that they are oblivious to
+the ultimate use of the learned function, namely optimization. The
+experimenter only cares about learning the function to the extent that what
+he knows suffices to optimize it. As such, the main question underlying the
+present work is the following:
+
+\begin{center}
+ \emph{What is a good model of learning for submodular functions when the
+ ultimate goal is to be able to optimize them?}
+\end{center}
+
+We make the following contributions:
+\begin{itemize}
+  \item We show that the PMAC learning framework does not provide
+  a satisfactory foundation for optimizing under uncertainty. In particular,
+  we show that there are functions which are PMAC-learnable but not
+  optimizable, and vice versa.
+  \item For specific classes of submodular functions, we show how to optimize
+  them under uncertainty.
+\end{itemize}
+
+\subsection{Related Work}
+
+\paragraph{Maximizing submodular functions} There is a vast literature on
+maximizing submodular functions under many kinds of constraints. All of these
+algorithms rely on some kind of oracle (e.g.\ value query or demand query)
+that is stronger than what we have: access to polynomially many observations
+coming from a distribution. In particular, an algorithm in our setting cannot
+make arbitrary (and certainly not adaptive) queries to an oracle.
+
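+For instance, for maximizing a monotone submodular function under
+a cardinality constraint $|S| \leq k$, the classical greedy algorithm
+guarantees
+\[
+  f(S_{\mathrm{greedy}}) \;\geq\; \left(1 - \frac{1}{e}\right) \max_{|S| \leq k} f(S),
+\]
+but each of its $k$ steps queries the marginal gain $f(S \cup \{e\}) - f(S)$
+of every remaining element $e$ with respect to the current, adaptively built
+set $S$; such adaptive value queries are exactly what random samples do not
+provide.
+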
+\paragraph{PMAC-learning} This is an interesting line of work, but it is
+oblivious to the ultimate goal of optimizing the learned function.
+
+\paragraph{Bandits} The bandit setting is interesting in that it ties
+together learning and optimizing. However, it assumes that arbitrary queries
+can be made to an oracle, allowing the experimenter to explore more
+efficiently (even though the standard regret benchmark means that he will
+incur some cost for exploring rather than exploiting).
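+
+In the simplest form of the benchmark, after $T$ rounds the regret compares
+the reward collected to that of the best fixed action in hindsight,
+\[
+  R_T \;=\; T \cdot \max_{S} f(S) \;-\; \mathbb{E}\left[ \sum_{t=1}^{T} f(S_t) \right],
+\]
+where $S_t$ is the set played at round $t$; for submodular rewards the
+maximum is typically replaced by an approximate benchmark, since even the
+offline maximization problem is hard.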