diff --git a/problem.tex b/problem.tex
new file mode 100644
index 0000000..fb9f8e1
--- /dev/null
+++ b/problem.tex
@@ -0,0 +1,193 @@
+\subsection{Notations}
+
+Throughout the paper, we will use the following notations: if $x$ is
+a (column) vector in $\mathbf{R}^d$, $x^*$ denotes its transpose (row)
+vector. Thus, the standard inner product between two vectors $x$ and $y$ is
+simply $x^* y$, and $\norm{x}_2 = \sqrt{x^*x}$ denotes the $L_2$ norm of $x$.
+
+We will also often use the following partial order over symmetric matrices
+(the Loewner order): if $A$ and $B$ are two $d\times d$ real symmetric
+matrices, we write $A\leq B$ if and only if:
+\begin{displaymath}
+ \forall x\in\mathbf{R}^d,\quad
+ x^*Ax \leq x^*Bx
+\end{displaymath}
+That is, if and only if $B-A$ is symmetric positive semi-definite.
+
+This order lets us define \emph{decreasing} and \emph{convex} matrix
+functions similarly to their real counterparts. In particular, let us
+recall that matrix inversion is decreasing and convex over symmetric
+positive definite matrices.
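+
+For instance, on positive diagonal matrices this order reduces to the
+entrywise order on the diagonal, which makes the monotonicity of inversion
+transparent: if $0 < a_j \leq b_j$ for all $j$, then
+\begin{displaymath}
+  \mathrm{diag}(a_1,\ldots,a_d) \leq \mathrm{diag}(b_1,\ldots,b_d)
+  \quad\text{and}\quad
+  \mathrm{diag}(b_1,\ldots,b_d)^{-1} \leq \mathrm{diag}(a_1,\ldots,a_d)^{-1}.
+\end{displaymath}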
+
+\subsection{Data model}
+
+There is a set of $n$ users, $\mathcal{N} = \{1,\ldots, n\}$. Each user
+$i\in\mathcal{N}$ has a public vector of features $x_i\in\mathbf{R}^d$ and an
+undisclosed piece of information $y_i\in\mathbf{R}$. We assume that the data
+has already been normalized so that $\norm{x_i}_2\leq 1$ for all
+$i\in\mathcal{N}$.
+
+The experimenter is going to select a set of users and ask them to reveal
+their private pieces of information. We are interested in a \emph{survey
+setup}: the experimenter has not seen the data yet, but must decide which
+users to select. His goal is to learn the model underlying the data. Here,
+we assume a linear model:
+\begin{displaymath}
+ \forall i\in\mathcal{N},\quad y_i = \beta^* x_i + \varepsilon_i
+\end{displaymath}
+where $\beta\in\mathbf{R}^d$ is the model parameter and each
+$\varepsilon_i\in\mathbf{R}$ follows a normal distribution of mean $0$ and
+variance $\sigma^2$. Furthermore, we assume the noise to be independent
+across users: the $(\varepsilon_i)_{i\in\mathcal{N}}$ are mutually
+independent.
+
+After observing the data, the experimenter could simply perform linear
+regression to learn the model parameter $\beta$. However, in a more general
+setup, the experimenter has prior knowledge about $\beta$, expressed as
+a distribution over $\mathbf{R}^d$. After observing the data, the
+experimenter performs \emph{maximum a posteriori estimation}: computing the
+point which maximizes the posterior density of $\beta$ given the
+observations.
+
+Here, we will assume, as is often done, that the prior distribution is
+a multivariate normal distribution of mean zero and covariance matrix $\kappa
+I_d$. Maximum a posteriori estimation then leads to the following
+minimization problem:
+\begin{displaymath}
+  \beta_{\text{MAP}} = \argmin_{\beta\in\mathbf{R}^d} \sum_i (y_i - \beta^*x_i)^2
+  + \frac{1}{\mu}\norm{\beta}_2^2
+\end{displaymath}
+which is the well-known \emph{ridge regression}, where $\mu
+= \frac{\kappa}{\sigma^2}$ and $1/\mu$ plays the role of the regularization
+parameter. Ridge regression can thus be seen as linear regression with
+a regularization term which prevents $\beta$ from having a large $L_2$-norm.
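+
+The estimator has the usual closed form
+$\beta_{\text{MAP}} = (X^*X + \frac{1}{\mu}I_d)^{-1}X^*y$, where $X$ stacks
+the observed $x_i^*$ as rows and $y$ is the vector of corresponding
+observations. Purely as an illustration (the function and variable names
+below are ours, not part of the model), a minimal NumPy sketch of this
+estimator on synthetic data drawn from the model could look as follows:
+\begin{verbatim}
+import numpy as np
+
+def ridge_map(X, y, mu):
+    """MAP / ridge estimate: minimizes sum_i (y_i - beta^T x_i)^2
+    + (1/mu) * ||beta||_2^2, i.e. (X^T X + (1/mu) I_d)^{-1} X^T y."""
+    d = X.shape[1]
+    return np.linalg.solve(X.T @ X + np.eye(d) / mu, X.T @ y)
+
+# Synthetic data following the linear model y_i = beta^T x_i + eps_i
+rng = np.random.default_rng(0)
+n, d, kappa, sigma = 200, 5, 1.0, 0.1
+beta = rng.normal(0.0, np.sqrt(kappa), size=d)   # prior N(0, kappa I_d)
+X = rng.normal(size=(n, d))
+X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # ||x_i|| <= 1
+y = X @ beta + rng.normal(0.0, sigma, size=n)    # noise N(0, sigma^2)
+print(ridge_map(X, y, mu=kappa / sigma**2))
+\end{verbatim}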
+
+\subsection{Value of data}
+
+Because the users' private variables $y_i$ have not yet been observed when
+the experimenter has to decide which users to include in his experiment, we
+treat $\beta$ as a random variable whose distribution is updated after
+observing the data.
+
+Let us recall that if $\beta$ is a random variable over $\mathbf{R}^d$ whose
+probability distribution has a density function $f$ with respect to the
+Lebesgue measure, its (differential) entropy is given by:
+\begin{displaymath}
+  \mathbb{H}(\beta) \defeq - \int_{\mathbf{R}^d} f(b)\log f(b)\,\text{d}b
+\end{displaymath}
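+For instance, for the multivariate normal prior of covariance $\kappa I_d$
+introduced above, a direct computation gives
+$\mathbb{H}(\beta) = \frac{1}{2}\log\big((2\pi e)^d \kappa^d\big)$.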
+
+A usual way to measure the decrease in uncertainty induced by the observation
+of data is to use the entropy. This leads to the following definition of the
+value of data, called the \emph{value of information}:
+\begin{displaymath}
+  \forall S\subset\mathcal{N},\quad V(S) = \mathbb{H}(\beta)
+  - \mathbb{H}(\beta\,|\, Y_S)
+\end{displaymath}
+where $Y_S = \{y_i,\,i\in S\}$ is the set of observed data.
+
+\begin{theorem}
+ Under the ridge regression model explained in section TODO, the value of data
+ is equal to:
+ \begin{align*}
+ \forall S\subset\mathcal{N},\; V(S)
+ & = \frac{1}{2}\log\det\left(I_d
+ + \mu\sum_{i\in S} x_ix_i^*\right)\\
+ & \defeq \frac{1}{2}\log\det A(S)
+ \end{align*}
+\end{theorem}
+
+\begin{proof}
+
+Let us denote by $X_S$ the matrix whose rows are the vectors $(x_i^*)_{i\in
+S}$. Observe that $A(S)$ can simply be written as:
+\begin{displaymath}
+  A(S) = I_d + \mu X_S^* X_S
+\end{displaymath}
+
+Let us recall that the entropy of a multivariate normal variable $B$ over
+$\mathbf{R}^d$ of covariance matrix $\Sigma$ is given by:
+\begin{equation}\label{eq:multivariate-entropy}
+  \mathbb{H}(B) = \frac{1}{2}\log\big((2\pi e)^d \det \Sigma\big)
+\end{equation}
+
+Observe that $V(S)$ is the mutual information between $\beta$ and $Y_S$. By
+symmetry of the mutual information (or, equivalently, by the chain rule for
+entropy), we get that:
+\begin{displaymath}
+  V(S) = \mathbb{H}(\beta) - \mathbb{H}(\beta\,|\,Y_S)
+       = \mathbb{H}(Y_S) - \mathbb{H}(Y_S\,|\,\beta)
+\end{displaymath}
+
+Conditioned on $\beta$, $Y_S$ follows a multivariate normal distribution of
+mean $X_S\beta$ and of covariance matrix $\sigma^2 I_{|S|}$. Hence:
+\begin{equation}\label{eq:h1}
+  \mathbb{H}(Y_S\,|\,\beta)
+  = \frac{1}{2}\log\left((2\pi e)^{|S|} \det(\sigma^2I_{|S|})\right)
+\end{equation}
+
+Marginally, $Y_S$ also follows a multivariate normal distribution of mean
+zero. Let us compute its covariance matrix $\Sigma_Y$. Writing $E_S
+= (\varepsilon_i)_{i\in S}$ and using that $\beta$ and $E_S$ are independent
+and centered, the cross terms vanish and:
+\begin{align*}
+  \Sigma_Y & = \expt{Y_SY_S^*} = \expt{(X_S\beta + E_S)(X_S\beta + E_S)^*}\\
+  & = X_S\expt{\beta\beta^*}X_S^* + \expt{E_SE_S^*}
+  = \kappa X_S X_S^* + \sigma^2I_{|S|}
+\end{align*}
+Thus, we get that:
+\begin{equation}\label{eq:h2}
+  \mathbb{H}(Y_S)
+  = \frac{1}{2}\log\left((2\pi e)^{|S|} \det(\kappa X_S X_S^* + \sigma^2 I_{|S|})\right)
+\end{equation}
+
+Combining \eqref{eq:h1} and \eqref{eq:h2} we get:
+\begin{displaymath}
+  V(S) = \frac{1}{2}\log\det\left(I_{|S|}+\frac{\kappa}{\sigma^2}X_S
+  X_S^*\right)
+\end{displaymath}
+
+Finally, Sylvester's determinant theorem gives
+$\det(I_{|S|} + \mu X_S X_S^*) = \det(I_d + \mu X_S^* X_S) = \det A(S)$,
+which yields the result.
+\end{proof}
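+
+As a quick sanity check, for a single user $S=\{i\}$, Sylvester's theorem (or
+a direct computation) gives:
+\begin{displaymath}
+  V(\{i\}) = \frac{1}{2}\log\det\left(I_d + \mu x_ix_i^*\right)
+  = \frac{1}{2}\log\left(1 + \mu\norm{x_i}_2^2\right)
+\end{displaymath}
+so the value of a single observation only depends on the norm of the feature
+vector and is at most $\frac{1}{2}\log(1+\mu)$ under our normalization.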
+
+It is also interesting to look at the marginal contribution of a user to
+a set: the increase in value induced by adding a user to an already selected
+set of users. We have the following lemma.
+
+\begin{lemma}[Marginal contribution]
+  For all $S\subset\mathcal{N}$ and $i\in\mathcal{N}\setminus S$,
+  \begin{displaymath}
+ \Delta_i V(S)\defeq V(S\cup\{i\}) - V(S)
+ = \frac{1}{2}\log\left(1 + \mu x_i^*A(S)^{-1}x_i\right)
+ \end{displaymath}
+\end{lemma}
+
+\begin{proof}
+ We have:
+ \begin{align*}
+ V(S\cup\{i\}) & = \frac{1}{2}\log\det A(S\cup\{i\})\\
+ & = \frac{1}{2}\log\det\left(A(S) + \mu x_i x_i^*\right)\\
+ & = V(S) + \frac{1}{2}\log\det\left(I_d + \mu A(S)^{-1}x_i
+ x_i^*\right)\\
+ & = V(S) + \frac{1}{2}\log\left(1 + \mu x_i^* A(S)^{-1}x_i\right)
+ \end{align*}
+  where the third equality follows by factoring out $A(S)$ and the last one
+  from Sylvester's determinant formula.
+\end{proof}
+
+Because $A(S)$ is symmetric positive definite, the marginal contribution is
+nonnegative, which proves that the value function is nondecreasing.
+Furthermore, it is easy to see that if $S\subset S'$, then $A(S)\leq A(S')$.
+Using the fact that matrix inversion is decreasing, we see that the marginal
+contribution of a fixed user is nonincreasing in the set. This is the
+\emph{submodularity} of the value function.
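+
+Purely as an illustration (the function name and the greedy rule below are
+ours, not a description of the mechanism studied in this paper), a short
+NumPy sketch of greedy selection based on this marginal contribution could
+look as follows; at each step it picks the user with the largest marginal
+value $\Delta_i V(S)$, i.e.\ the largest quadratic form $x_i^*A(S)^{-1}x_i$:
+\begin{verbatim}
+import numpy as np
+
+def greedy_selection(X, mu, k):
+    """Greedily pick k users maximizing the marginal value
+    Delta_i V(S) = 0.5 * log(1 + mu * x_i^T A(S)^{-1} x_i),
+    where A(S) = I_d + mu * sum_{i in S} x_i x_i^T."""
+    n, d = X.shape
+    A_inv = np.eye(d)                     # A(emptyset)^{-1} = I_d
+    selected, remaining = [], set(range(n))
+    for _ in range(k):
+        gains = {i: X[i] @ A_inv @ X[i] for i in remaining}
+        best = max(gains, key=gains.get)  # largest x_i^T A(S)^{-1} x_i
+        selected.append(best)
+        remaining.remove(best)
+        # Sherman-Morrison rank-one update of A(S)^{-1} after adding x_best
+        u = A_inv @ X[best]
+        A_inv -= mu * np.outer(u, u) / (1.0 + mu * X[best] @ u)
+    return selected
+\end{verbatim}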
+
+TODO? Explain which points are the most valuable: points aligned with
+directions along which the spread of the already selected points is small.
+
+\subsection{Auction}
+
+TODO Explain the optimization problem and why it has to be formulated as an
+auction problem. Explain the goals:
+\begin{itemize}
+  \item truthful
+  \item individually rational
+  \item budget feasible
+  \item has a good approximation ratio
+\end{itemize}
+
+TODO Explain what is already known: the problem is well understood when the
+value function is submodular. When should we introduce the notion of
+submodularity?
+
+