\subsection{Bayesian Experimental Design}\label{sec:bed}
We now extend Theorem~\ref{thm:main} to a more general Bayesian setting in which the experimenter \E\ has a \emph{prior} distribution on $\beta$: in particular, $\beta$ has a multivariate normal prior with zero mean and covariance $\sigma^2R^{-1}\in \reals^{d\times d}$ (where $\sigma^2$ is the noise variance). \E\ estimates $\beta$ through \emph{maximum a posteriori estimation}, \emph{i.e.}, by finding the parameter that maximizes the posterior distribution of $\beta$ given the observations $y_S$. Under the linearity assumption \eqref{model} and the Gaussian prior on $\beta$, maximum a posteriori estimation leads to the following minimization \cite{hastie}:
\begin{displaymath}
\hat{\beta} = \argmin_{\beta\in\reals^d} \sum_i (y_i - \T{\beta}x_i)^2 + \T{\beta}R\beta
\end{displaymath}
This optimization, commonly known as \emph{ridge regression}, includes an additional penalty term compared to the least squares estimation \eqref{leastsquares}.

Let $\entropy(\beta)$ be the entropy of $\beta$ under the prior, and $\entropy(\beta\mid y_S)$ the entropy of $\beta$ conditioned on the experiment outcomes $y_S$, for some $S\subseteq \mathcal{N}$. In this setting, a natural objective, originally proposed by Lindley \cite{lindley1956measure}, is to select a set of experiments $S$ that maximizes her \emph{information gain}:
\begin{displaymath}
I(\beta;y_S) = \entropy(\beta)-\entropy(\beta\mid y_S).
\end{displaymath}
Assuming normal noise variables, the information gain is equal (up to an additive constant) to the following value function \cite{chaloner1995bayesian}:
\begin{align}
V(S) = \frac{1}{2}\log\det(R + \T{X_S}X_S)\label{bayesianobjective}
\end{align}
Our objective for \EDP\ clearly follows from \eqref{bayesianobjective} by setting $R=I_d$. Hence, the optimization discussed thus far can be interpreted as a maximization of the information gain when the prior distribution has covariance $\sigma^2 I_d$ and the experimenter solves a ridge regression problem with penalty term $\norm{\beta}_2^2$.

Our results extend to the general Bayesian case by replacing $I_d$ with the positive definite matrix $R$. First, we re-set the origin of the value function so that $V(\emptyset) = 0$:
\begin{align}\label{eq:normalized}
\tilde{V}(S) & = \frac{1}{2}\log\det(R + \T{X_S}X_S) - \frac{1}{2}\log\det R\\
& = \frac{1}{2}\log\det(I_d + R^{-1}\T{X_S}X_S)\notag
\end{align}
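For intuition, note that the matrix determinant lemma gives a closed form for the marginal gain of \eqref{eq:normalized}: for any $S\subseteq\mathcal{N}$ and $i\notin S$,
\begin{displaymath}
\tilde{V}(S\cup\{i\}) - \tilde{V}(S) = \frac{1}{2}\log\Big(1 + \T{x_i}\big(R + \T{X_S}X_S\big)^{-1}x_i\Big) \geq 0.
\end{displaymath}
Since $(R + \T{X_S}X_S)^{-1}$ can only decrease, in the positive semidefinite order, as $S$ grows, this marginal gain is non-increasing in $S$; hence $\tilde{V}$ is nondecreasing and submodular for any positive definite $R$.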
Applying the mechanism described in Algorithm~\ref{mechanism} and adapting the analysis of the approximation ratio, we get the following result, which extends Theorem~\ref{thm:main}:
\begin{theorem}
There exists a truthful, individually rational and budget feasible mechanism for the objective function $\tilde{V}$ given by \eqref{eq:normalized}. Furthermore, for any $\varepsilon > 0$, in time $O(\text{poly}(n, d, \log\log \varepsilon^{-1}))$, the algorithm computes a set $S^*$ such that:
\begin{displaymath}
OPT \leq \frac{5e-1}{e-1}\frac{2\mu}{\log(1+\mu)}\tilde{V}(S^*) + 5.1 + \varepsilon
\end{displaymath}
where $\mu$ is the smallest eigenvalue of $R$.
\end{theorem}

\subsection{$D$-Optimality and Beyond}
We now reexamine the classical $D$-optimality criterion \eqref{dcrit}, which is \eqref{bayesianobjective} with $R$ replaced by the zero matrix. Since \eqref{dcrit} may take arbitrarily small negative values, to define a meaningful approximation one would instead consider the (equivalent) maximization of $V(S) = \det(\T{X_S}X_S)$. However, the following lower bound implies that this optimization goal cannot be attained under the constraints of truthfulness, budget feasibility, and individual rationality.
\begin{lemma}
For any $M>1$, there is no $M$-approximate, truthful, budget feasible, individually rational mechanism for a budget feasible reverse auction with value function $V(S) = \det(\T{X_S}X_S)$.
\end{lemma}
\begin{proof}
\input{proof_of_lower_bound1}
\end{proof}

\subsection{Beyond Linear Models}
Selecting experiments that maximize the information gain in the Bayesian setup generalizes naturally to learning settings beyond linear regression. In particular, in the more general PAC learning setup \cite{valiant}, the features $x_i$, $i\in \mathcal{N}$, take values in some general set $\Omega$, called the \emph{query space}, and measurements $y_i\in\reals$ are given by
\begin{equation}\label{eq:hypothesis-model}
y_i = h(x_i) + \varepsilon_i
\end{equation}
where $h\in \mathcal{H}$ for some subset $\mathcal{H}$ of all possible mappings $h:\Omega\to\reals$, called the \emph{hypothesis space}, and the $\varepsilon_i$ are random variables in $\reals$, not necessarily identically distributed, that are independent \emph{conditioned on $h$}. This model is quite broad and captures many learning tasks, such as support vector machines (SVM), linear discriminant analysis (LDA), and the following:
\begin{enumerate}
\item\textbf{Generalized Linear Regression.} $\Omega=\reals^d$, $\mathcal{H}$ is the set of linear maps $\{h(x) = \T{\beta}x \text{ s.t. } \beta\in \reals^d\}$, and the $\varepsilon_i$ are independent zero-mean normal variables with $\expt{\varepsilon_i^2}=\sigma_i^2$.
\item\textbf{Logistic Regression.} $\Omega=\reals^d$, $\mathcal{H}$ is the set of maps $\{h(x) = \frac{e^{\T{\beta} x}}{1+e^{\T{\beta} x}} \text{ s.t. } \beta\in\reals^d\}$, and the $\varepsilon_i$ are independent conditioned on $h$ with
$$\varepsilon_i=\begin{cases} 1- h(x_i),& \text{w.~prob.}\;h(x_i)\\-h(x_i),&\text{w.~prob.}\;1-h(x_i)\end{cases}$$
\item\textbf{Learning Binary Functions with Bernoulli Noise.} $\Omega = \{0,1\}^d$, $\mathcal{H}$ is some subset of the binary maps $h:\Omega\to\{0,1\}$, and
$$\varepsilon_i =\begin{cases}0, &\text{w.~prob.}\;p\\\bar{h}(x_i)-h(x_i), &\text{w.~prob.}\;1-p\end{cases}$$
where $\bar{h}=1-h$ denotes the complement of $h$; \emph{i.e.}, each label is flipped independently with probability $1-p$.
\end{enumerate}
In this setup, assume that the experimenter has a prior distribution on the hypothesis $h\in \mathcal{H}$. Then, the information gain objective is again the mutual information, now between $h$ and $y_S$:
\begin{align}\label{general}
V(S) = \entropy(h) -\entropy(h\mid y_S),\quad S\subseteq\mathcal{N}
\end{align}
This is a monotone set function, and it clearly satisfies $V(\emptyset)=0$. Although mutual information is not a submodular function in general, this specific setup does lead to a submodular formulation.
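For concreteness, consider the logistic regression example above: conditioned on $h$, the outcomes $y_i$, $i\in S$, are independent Bernoulli variables with parameters $h(x_i)$, so the conditional entropy of the observations decomposes into per-experiment terms,
\begin{displaymath}
\entropy(y_S\mid h) = \sum_{i\in S}\entropy(y_i\mid h) = \sum_{i\in S}\expt{H_b(h(x_i))},
\end{displaymath}
where $H_b(q) = -q\log q - (1-q)\log(1-q)$ denotes the binary entropy and the expectation is over the prior on $h$. The proof of the following lemma shows that this decomposition of the conditional entropy is all that is needed for submodularity.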
\begin{lemma}
The value function given by the information gain \eqref{general} is submodular.
\end{lemma}
\begin{proof}
By the symmetry of mutual information and the chain rule for conditional entropy, we get:
\begin{equation}\label{eq:chain-rule}
V(S) = \entropy(y_S) - \entropy(y_S \mid h) = \entropy(y_S) - \sum_{i\in S} \entropy(y_i \mid h)
\end{equation}
where the second equality comes from the independence of the $y_i$'s conditioned on $h$. Recall that the joint entropy of a set of random variables is a submodular function. Thus, the value function is written in \eqref{eq:chain-rule} as the sum of a submodular function and a modular function, and is therefore submodular.
\end{proof}
This lemma implies that learning an \emph{arbitrary hypothesis under an arbitrary prior}, when the noise is conditionally independent, leads to a submodular value function. Hence, we can apply the known results on budget feasible mechanisms for submodular objectives to get the following corollary:
\begin{corollary}
For Bayesian experimental design with the objective given by the information gain \eqref{general}, there exists a randomized, budget feasible, individually rational, and universally truthful mechanism. This mechanism has complexity $O\big(\text{poly}(n,d)\big)$ and an approximation ratio of $7.91$. In cases where maximizing \eqref{general} can be done in polynomial time in the full-information setup, there exists a deterministic, budget feasible, individually rational, and truthful mechanism for Bayesian experimental design. This mechanism has complexity $O\big(\text{poly}(n,d)\big)$ and an approximation ratio of $8.34$.
\end{corollary}
Note, however, that in many scenarios covered by this model (including the last two examples above), even computing the value of a given set---\emph{i.e.}, the entropy terms in \eqref{eq:chain-rule}---might be a hard task, so the value query model may not apply. Hence, identifying learning tasks in the above class for which truthful or universally truthful constant-approximation mechanisms exist, or studying these problems in the context of stronger query models such as the demand model \cite{dobz2011-mechanisms,bei2012budget}, remains an interesting open question.
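To illustrate the difficulty of value queries in this setting, consider the third example above and write $\pi$ for the prior on $\mathcal{H}$. A direct expansion of the joint entropy term in \eqref{eq:chain-rule} gives
\begin{displaymath}
\entropy(y_S) = -\sum_{v\in\{0,1\}^{S}} \Pr[y_S = v]\log \Pr[y_S = v], \quad\text{where}\quad \Pr[y_S = v] = \sum_{h\in\mathcal{H}} \pi(h)\prod_{i\in S}\Pr[y_i = v_i \mid h]
\end{displaymath}
and $\Pr[y_i = v_i \mid h]$ equals $p$ if $v_i = h(x_i)$ and $1-p$ otherwise; a naive evaluation of a single value query thus sums over $2^{|S|}$ outcome vectors as well as over the support of $\pi$.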