\begin{itemize}
    \item Experiment design is an already well-established field: in a
    survey-like setup, which are the best points to include in the
    experiment? The usefulness of the data is typically measured by
    variance reduction or entropy reduction.
    \item Nowadays there is also a strong focus on purchasing data (paid
    surveys, Mechanical Turk, etc.), which adds an economic aspect to the
    experiment design problem.
    \item Recent advances (Singer, Chen) in the field of budgeted mechanisms.
    \item We study ridge regression, which is very widely used in statistical
    learning, and treat it as a problem of budgeted experiment design (see the
    sketch after this list).
    \item We make the following contributions: ...
    \item Extension to a more general setup which covers a wider class of
    machine learning problems.
\end{itemize}
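
As a rough sketch of the entropy-reduction criterion mentioned in the list
above (the precise formulation is given later; the symbols $x_i$, $c_i$,
$\lambda$ and $B$ are only illustrative), the budgeted experiment design
problem for ridge regression amounts to buying, within budget, the data points
whose regularized information matrix has the largest log-determinant:
\begin{equation*}
  \max_{S\subseteq\mathcal{N}} \;\log\det\Big(\lambda I_d + \sum_{i\in S} x_i x_i^\top\Big)
  \quad\text{s.t.}\quad \sum_{i\in S} c_i \leq B,
\end{equation*}
where $x_i\in\mathbb{R}^d$ is the data point held by agent $i$, $c_i$ its cost,
and $B$ the buyer's budget.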

\subsection{Related work}

There are two types of mechanisms: \emph{deterministic} and \emph{randomized}.
For randomized mechanisms, one seeks \emph{universally truthful} mechanisms:
mechanisms which are distributions over deterministic truthful mechanisms.

\paragraph{Symmetric submodular functions} $V(S) = g(|S|)$ where $g$ is
a concave function. There is a truthful deterministic mechanism with
approximation ratio 2, and this is optimal \cite{singer-mechanisms}.

\paragraph{Knapsack} deterministic: $1+\sqrt{2}\leq \alpha \leq 2 + \sqrt{2}$,
randomized: $2\leq \alpha\leq 3$ \cite{chen}

\paragraph{Matching} deterministic: $2 \leq \alpha\leq 7.32$ \cite{singer-mechanisms}

\paragraph{Coverage} deterministic: $ ?\leq\alpha\leq 31$ \cite{singer-influence}

\paragraph{Submodular function} deterministic: $1+\sqrt{2}\leq\alpha\leq ?$,
randomized: $2\leq\alpha\leq 7.91$ \cite{chen}

For wider classes of functions, \cite{singer-mechanisms} proves that even for
XOS functions the lower bound is $\sqrt{n}$ (no constant approximation), even
in the non-strategic case. To be able to say something interesting, the
computational model is extended to the \emph{demand query} model: one has
access to an oracle which, given a price vector $[p_i]_{i\in\mathcal{N}}$,
returns $S\in\argmax_{S\subseteq\mathcal{N}} V(S) - \sum_{i\in S} p_i$.
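
As a purely illustrative example (not taken from the cited works): if $V$ is
additive with item values $v_i\geq 0$, a demand query with prices
$[p_i]_{i\in\mathcal{N}}$ can be answered by keeping exactly the profitable
items, since
\begin{equation*}
  \{i\in\mathcal{N} : v_i > p_i\} \;\in\;
  \argmax_{S\subseteq\mathcal{N}} \Big(\sum_{i\in S} v_i - \sum_{i\in S} p_i\Big).
\end{equation*}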

\paragraph{XOS functions} Fractionally subadditive functions: functions which
can be written as the maximum of a finite set of additive functions.
deterministic: 768.
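
Written out explicitly (standard form, not copied from the cited papers): $V$
is XOS if there are nonnegative weights $w_{j,i}$, $1\leq j\leq m$, such that
\begin{equation*}
  V(S) = \max_{1\leq j\leq m} \sum_{i\in S} w_{j,i}.
\end{equation*}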

\paragraph{Complement-free (subadditive) objective} deterministic: $O(\log^3 n)$
\cite{dobz2011-mechanisms}; randomized: $O(\frac{\log n}{\log \log n})$
\cite{bei2012budget}. More generally, \cite{bei2012budget} gives a randomized
mechanism which is $O(\mathcal{I})$-approximate, where $\mathcal{I}$ is the
integrality gap of a set-cover-like problem defined from the value function. In
particular, for XOS functions this integrality gap is $O(1)$.

\paragraph{Bayesian mechanism design} Here we have a distribution over the
costs of the agents, and the approximation guarantee is defined in expectation.
\cite{bei2012budget} gives a $768/512$ approximation ratio for any subadditive
function.

\paragraph{Online mechanisms} \cite{learning} gives, in the i.i.d.\ posted-price
model, a randomized $O(1)$-competitive algorithm for symmetric submodular
functions; for general submodular functions in the secretary posted-price
model, a randomized $O(\log n)$-competitive algorithm; and in the bidding
model, an $O(1)$-competitive algorithm.

\stratis{What is known about the maximization of logdet in the non-strategic case? Is it NP hard? Approximation ratios?}
\thibaut{Knapsack reduces to our problem in dimension 1, hence maximizing
$\log\det$ is NP-hard. In the non-strategic case, a $1-1/e$ approximation ratio
is achievable by applying the general algorithm of Sviridenko
\cite{sviridenko-submodular} for monotone submodular maximization under a
knapsack constraint.}
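
A sketch of this reduction, assuming the log-determinant objective outlined in
the introduction (notation as in that sketch): in dimension $d=1$ the objective
becomes
\begin{equation*}
  \log\Big(\lambda + \sum_{i\in S} x_i^2\Big),
\end{equation*}
which is a strictly increasing function of the additive quantity
$\sum_{i\in S} x_i^2$. Maximizing it under a budget constraint is therefore
exactly a knapsack instance: given knapsack values $v_i\geq 0$, set
$x_i=\sqrt{v_i}$.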