author    Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2013-07-07 00:29:53 -0700
committer Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2013-07-07 00:29:53 -0700
commit    c5c7e55c0d14009d47d838efad3a799cb59a3d77 (patch)
tree      769549d4994bc96683ae44fd46d852c838f296d8 /problem.tex
parent    767e7ba86a7e916680faef79f885f1a5ce8c6b2b (diff)
download  recommendation-c5c7e55c0d14009d47d838efad3a799cb59a3d77.tar.gz
Diffstat (limited to 'problem.tex')
-rw-r--r--  problem.tex | 10
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/problem.tex b/problem.tex
index 851f691..89ffa9d 100644
--- a/problem.tex
+++ b/problem.tex
@@ -27,7 +27,7 @@ Then, \E\ estimates $\beta$ through \emph{maximum a posteriori estimation}: \emp
\end{align}
where the last equality is obtained by setting $\nabla_{\beta}\prob(\beta\mid y_S)$ to zero and solving the resulting linear system; in \eqref{ridge}, $X_S\defeq[x_i]_{i\in S}\in \reals^{|S|\times d}$ is the matrix of experiment features and
$y_S\defeq[y_i]_{i\in S}\in\reals^{|S|}$ are the observed measurements.
-This optimization, commonly known as \emph{ridge regression}, includes an additional quadratic penalty term compared to the standard least squares estimation.
+This optimization, commonly known as \emph{ridge regression}, includes an additional quadratic penalty term $\T{\beta}R\beta$ compared to standard least squares estimation.
% under \eqref{model}, the maximum likelihood estimator of $\beta$ is the \emph{least squares} estimator: for $X_S=[x_i]_{i\in S}\in \reals^{|S|\times d}$ the matrix of experiment features and
%$y_S=[y_i]_{i\in S}\in\reals^{|S|}$ the observed measurements,
%\begin{align} \hat{\beta} &=\max_{\beta\in\reals^d}\prob(y_S;\beta) =\argmin_{\beta\in\reals^d } \sum_{i\in S}(\T{\beta}x_i-y_i)^2 \nonumber\\
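As a concrete sketch of the estimator in \eqref{ridge}: setting the gradient of the penalized objective to zero gives the closed form $\hat{\beta}=(\T{X_S}X_S+R)^{-1}\T{X_S}y_S$, which the snippet below implements. The toy data and the name ridge_map_estimate are illustrative assumptions, not part of the paper.

import numpy as np

def ridge_map_estimate(X_S, y_S, R):
    # MAP / ridge estimate: solve (X_S^T X_S + R) beta = X_S^T y_S,
    # the linear system obtained by setting the gradient of the
    # penalized least-squares objective to zero.
    return np.linalg.solve(X_S.T @ X_S + R, X_S.T @ y_S)

# Illustrative toy data.
rng = np.random.default_rng(0)
X_S = rng.standard_normal((20, 3))        # |S| = 20 experiments, d = 3
beta_true = np.array([1.0, -2.0, 0.5])
y_S = X_S @ beta_true + 0.1 * rng.standard_normal(20)
beta_hat = ridge_map_estimate(X_S, y_S, R=0.1 * np.eye(3))

Solving the linear system directly, rather than forming $(\T{X_S}X_S+R)^{-1}$ explicitly, is the standard numerically stable choice.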
@@ -109,11 +109,11 @@ the optimal value achievable in the full-information case.
reduces to EDP with dimension $d=1$ by mapping the weight of each item, say,
$w_i$, to an experiment with $x_i^2=w_i$.% Moreover, \eqref{modified} is submodular, monotone and satisfies $V(\emptyset) = 0$.
- Note that \eqref{modified} is a submodular set function, \emph{i.e.},
-$V(S)+V(T)\geq V(S\cup T)+V(S\cap T)$ for all $S,T\subseteq \mathcal{N}$. It is
+The value function \eqref{modified} has the following properties, which are proved in Appendix~\ref{app:properties}. First, it is non-negative, \emph{i.e.}, $V(S)\geq 0$
+for all $S\subseteq\mathcal{N}$. Second, it is
also monotone, \emph{i.e.}, $V(S)\leq V(T)$ for all $S\subset T$, with
-$V(\emptyset)=0$. Finally, it is non-negative, \emph{i.e.}, $V(S)\geq 0$
-for all $S\subseteq\mathcal{N}$, since the matrix $\T{X_S}X_S$ is positive semi-definite for all $S\subseteq \mathcal{N}$.
+$V(\emptyset)=0$. Finally, it is submodular, \emph{i.e.},
+$V(S)+V(T)\geq V(S\cup T)+V(S\cap T)$ for all $S,T\subseteq \mathcal{N}$.
%Finally, we define the strategic version of \EDP{} (denoted by \SEDP) by adding the additional constraints of truthfulness, individual rationality, and normal payments, as described in the previous section.
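The three properties above can be checked numerically. Since the exact form of \eqref{modified} is not shown in this hunk, the sketch below assumes the common D-optimal-design stand-in $V(S)=\log\det(I_d+\T{X_S}X_S)$, which is non-negative, monotone with $V(\emptyset)=0$, and submodular; if \eqref{modified} differs, only the body of V changes.

import itertools
import numpy as np

def V(S, X, d):
    # Illustrative stand-in for \eqref{modified}: log det(I_d + X_S^T X_S);
    # X_S stacks the feature rows of the experiments in S.
    X_S = X[sorted(S)]
    _, logdet = np.linalg.slogdet(np.eye(d) + X_S.T @ X_S)
    return logdet

rng = np.random.default_rng(1)
n, d = 4, 2
X = rng.standard_normal((n, d))
subsets = [frozenset(c) for r in range(n + 1)
           for c in itertools.combinations(range(n), r)]
for S in subsets:
    assert V(S, X, d) >= -1e-9                          # non-negativity
    for T in subsets:
        if S < T:
            assert V(S, X, d) <= V(T, X, d) + 1e-9      # monotonicity
        assert (V(S, X, d) + V(T, X, d)
                >= V(S | T, X, d) + V(S & T, X, d) - 1e-9)  # submodularity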
The above three properties imply that a greedy algorithm yields a constant approximation ratio for \EDP.
In particular, consider the greedy algorithm in which, for