path: root/paper/sections/introduction.tex
author	Thibaut Horel <thibaut.horel@gmail.com>	2015-04-14 15:20:19 -0400
committer	Thibaut Horel <thibaut.horel@gmail.com>	2015-04-14 15:20:19 -0400
commit	d8a68c6917f5b6053117e0145f6d4d80a8bec26b (patch)
tree	2abe1acc26ced6358dc2d112fea88147deac6283	/paper/sections/introduction.tex
parent	f5a0b1da9a7ff3346ac3af4ad9c1dd0fd9e71f46 (diff)
download	learn-optimize-d8a68c6917f5b6053117e0145f6d4d80a8bec26b.tar.gz
Starting paper draft
Diffstat (limited to 'paper/sections/introduction.tex')
-rw-r--r--	paper/sections/introduction.tex	27
1 file changed, 27 insertions(+), 0 deletions(-)
diff --git a/paper/sections/introduction.tex b/paper/sections/introduction.tex
new file mode 100644
index 0000000..bcb406f
--- /dev/null
+++ b/paper/sections/introduction.tex
@@ -0,0 +1,27 @@
+Let $\Omega$ be the universe of elements and $f$ a set function defined on
+subsets of $\Omega$: $f : S \in 2^{\Omega} \mapsto f(S) \in \mathbb{R}$. Let
+$K$ be a collection of subsets of $\Omega$, which we call \emph{constraints}.
+Let $S^*_K$ be any solution to $\max_{S \in K} f(S)$, which we also denote by
+$S^*$ when there is no ambiguity. Let $L$ be the problem size, which is often
+(but not always) equal to $|\Omega|$.
+
+In general, we say that we can efficiently optimize a function $f$ under
+constraints $K$ when we have a polynomial-time algorithm making adaptive value
+queries to $f$ which returns a set $S$ such that $S \in K$ and $f(S) \geq
+\alpha f(S^*)$ with high probability, where $\alpha$ is an absolute constant.
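As an illustrative instance of this definition (an example added here, not part of the draft): when $f$ is monotone submodular and $K$ is a cardinality constraint, the classical greedy algorithm of Nemhauser, Wolsey, and Fisher achieves the absolute constant $\alpha = 1 - 1/e$ using adaptive value queries.

```latex
% Illustrative example (not from the draft): monotone submodular
% maximization under a cardinality constraint. The greedy algorithm
% returns a set S_greedy satisfying the classical (1 - 1/e) guarantee.
\[
  K = \{\, S \subseteq \Omega : |S| \leq k \,\}, \qquad
  f(S_{\mathrm{greedy}}) \;\geq\; \Bigl(1 - \frac{1}{e}\Bigr) f(S^*).
\]
```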
+
+Here, we consider the scenario where we cannot make adaptive value queries,
+and in fact cannot make queries at all! Instead, we suppose that we observe a
+polynomial number of set-value pairs $(S, f(S))$ where $S$ is drawn from a
+known distribution $D$. We say that we can efficiently \emph{passively
+optimize} $f$ under distribution $D$, or $D$-optimize $f$ under constraints
+$K$, when, after observing $\mathcal{O}(L^c)$ set-value pairs from $D$, where
+$c > 0$ is an absolute constant, we can return a set $S$ such that $S \in K$
+and $f(S) \geq \alpha f(S^*)$ with high probability, where $\alpha$ is an
+absolute constant.
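For concreteness, one natural choice of $D$ (an assumption used here purely for illustration, with $p$ a parameter introduced by this sketch) is the product distribution in which each element of $\Omega$ is included in the observed set independently with probability $p$:

```latex
% Hypothetical illustration: a product distribution D_p over 2^Omega,
% where each element of Omega is included in the sampled set
% independently with probability p.
\[
  \Pr_{S \sim D_p}[S = T] \;=\; p^{|T|} \, (1 - p)^{|\Omega| - |T|},
  \qquad T \subseteq \Omega .
\]
```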
+
+In the case of \emph{passive} observations of set-value pairs drawn from a
+distribution $D$ for a function $f$, recent research has focused on whether
+$f$ can be efficiently and approximately \emph{learned}. This was formalized
+in the PMAC model of \cite{balcan2011learning}. When thinking about passive
+optimization, it is necessary to understand the link between being able to
+$D$-PMAC learn $f$ and being able to $D$-optimize $f$.
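For reference, the PMAC (Probably Mostly Approximately Correct) guarantee of \cite{balcan2011learning} can be sketched as follows; the hypothesis $g$, sample count $m$, and accuracy parameters $\varepsilon, \delta$ are notation introduced by this sketch, not by the draft. After seeing $m$ samples from $D$, the learner outputs a hypothesis $g$ that, with high probability over the samples, multiplicatively approximates $f$ on most of the mass of $D$:

```latex
% Sketch of the PMAC guarantee (g, m, epsilon, delta are notation
% introduced here): with probability at least 1 - delta over the m
% samples, the hypothesis g alpha-approximates f on at least a
% 1 - epsilon fraction of the mass of D.
\[
  \Pr_{S_1, \dots, S_m \sim D}\Bigl[
    \Pr_{S \sim D}\bigl[\, g(S) \le f(S) \le \alpha \, g(S) \,\bigr]
    \ge 1 - \varepsilon
  \Bigr] \;\ge\; 1 - \delta .
\]
```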