diff options
| author | Thibaut Horel <thibaut.horel@gmail.com> | 2016-02-29 19:39:56 -0500 |
|---|---|---|
| committer | Thibaut Horel <thibaut.horel@gmail.com> | 2016-02-29 19:39:56 -0500 |
| commit | 310718cb00370138b8d6f0e8a8222e5ecdda843c (patch) | |
| tree | 113938bc18de495bc555e146c5ab098a82d5095e /conclusion.tex | |
| parent | 49880b3de9e4a4a190e26d03dbe093e3534823de (diff) | |
| download | recommendation-310718cb00370138b8d6f0e8a8222e5ecdda843c.tar.gz | |
Diffstat (limited to 'conclusion.tex')
| -rw-r--r-- | conclusion.tex | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/conclusion.tex b/conclusion.tex
index 4db1030..6b17237 100644
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -5,7 +5,7 @@
 polynomial time. %Our objective function, commonly known as the Bayes $D$-optima
 A natural question to ask is to what extent ou results %we present here
 generalize to other machine learning tasks beyond linear regression. We outline
-a path to such a generalization in \cite{arxiv}: %. In
+a path to such a generalization in Appendix~\ref{sec:ext}: %. In
 %particular, although the information gain is not generally a submodular
 %function, we show that for a wide class of models in which experiment
```
