The natural “diminishing returns” property of submodular functions makes them
a suitable model for many real-life scenarios. However, even when a submodular
model fits, it is rarely realistic to assume that the function is entirely
known: in most applications, it depends on unknown parameters that have to be
learned from observations.
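
For concreteness, a set function $f \colon 2^V \to \mathbb{R}$ over a ground
set $V$ is submodular when its marginal gains are diminishing:
\[
    f(S \cup \{e\}) - f(S) \ge f(T \cup \{e\}) - f(T)
    \qquad \text{for all } S \subseteq T \subseteq V \text{ and }
    e \in V \setminus T.
\]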

A recent line of work has focused on learning submodular functions. The
PMAC-learning framework formalizes what it means to learn a submodular
function in an “overall” sense, i.e.\ to approximate its values on most sets
drawn from a distribution. For specific examples of submodular functions,
results are known on how well the parameters of the function can be learned
from observations.
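
Informally, in the PMAC model of Balcan and Harvey, the learner receives
polynomially many samples $(S_i, f(S_i))$ with $S_i \sim \mathcal{D}$ and must
output a hypothesis $\tilde{f}$ such that, with probability at least
$1 - \delta$ over the samples,
\[
    \Pr_{S \sim \mathcal{D}}\bigl[\, \tilde{f}(S) \le f(S) \le
    \alpha\,\tilde{f}(S) \,\bigr] \ge 1 - \varepsilon
\]
for some approximation factor $\alpha \ge 1$.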

When the experimenter wishes to optimize the unknown submodular function, the
well-known approximation algorithms cannot be readily applied because they rely
on a value query oracle, which is much stronger than what the present scenario
suggests (at best, independent random samples). The results on learning
submodular functions also seem insufficient in that they are myopic to the
ultimate use of the submodular function, namely optimizing it. The experimenter
only cares about learning the function to the extent that what he knows is
sufficient to optimize it. As such, the main question underlying the present
work is the following:

\begin{center}
    \emph{What is a good model of learning for submodular functions when the
    ultimate goal is to be able to optimize them?}
\end{center}

We make the following contributions:
\begin{itemize}
    \item We show that the PMAC-learning framework does not provide a
    satisfying framework for optimizing under uncertainty. In particular, we
    exhibit functions that are PMAC-learnable but not optimizable, and vice
    versa (a possible formalization of “optimizable” is sketched after this
    list).
    \item For specific classes of submodular functions, we show how to optimize
    them under uncertainty.
\end{itemize}
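
As one natural formalization of “optimizable” in this setting (stated here
only for intuition, not as the formal model): given $\mathrm{poly}(n)$
independent pairs $(S_i, f(S_i))$ with $S_i \sim \mathcal{D}$, the algorithm
must return a feasible set $T$ such that, with high probability,
\[
    f(T) \ge \alpha \cdot \max_{S \in \mathcal{F}} f(S)
\]
for some constant $\alpha > 0$ and feasibility constraint $\mathcal{F}$, e.g.\
a cardinality constraint $\{S : |S| \le k\}$.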

\subsection{Related Work}

\paragraph{Maximizing submodular functions} There is a vast literature on
maximizing submodular functions under many kinds of constraints. All of these
algorithms rely on some kind of oracle (e.g.\ a value query or demand query
oracle) that is stronger than what we have: access to polynomially many
observations drawn from a distribution. In particular, an algorithm in our
setting cannot make arbitrary (and certainly not adaptive) queries to an
oracle.
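
For instance, the classical greedy algorithm of Nemhauser, Wolsey, and Fisher
for maximizing a monotone submodular function under a cardinality constraint
$|S| \le k$ starts from $S_0 = \emptyset$ and, at each of its $k$ steps,
adaptively queries the value oracle to pick the element with the largest
marginal gain:
\[
    S_{i+1} = S_i \cup \Bigl\{ \operatorname*{arg\,max}_{e \in V \setminus S_i}
    \; f(S_i \cup \{e\}) - f(S_i) \Bigr\},
\]
which yields a $(1 - 1/e)$-approximation. Each step compares the marginal
gains of all remaining elements, exactly the kind of adaptive access that
independent samples do not provide.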

\paragraph{PMAC-learning} An interesting line of work, but one that is
oblivious to the ultimate goal: a hypothesis that approximates the function
well on most sets need not reveal enough structure to optimize it.

\paragraph{Bandits} This line of work is interesting in that it ties together
learning and optimizing. However, it assumes that arbitrary queries can be made
to an oracle, allowing the experimenter to explore more efficiently (even
though the standard regret benchmark means that he incurs some cost for
exploring rather than exploiting).
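
For instance, with a maximization objective over a feasible family
$\mathcal{F}$ as above, the standard benchmark is the cumulative regret over
a horizon of $T$ rounds,
\[
    R_T = \sum_{t=1}^{T} \bigl( f(S^\star) - f(S_t) \bigr),
    \qquad S^\star \in \operatorname*{arg\,max}_{S \in \mathcal{F}} f(S),
\]
where $S_t$ is the (adaptively chosen) query made at round $t$; this
adaptivity is precisely what is unavailable with independent samples.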