From 722aa31fef933d37bbf14c27a4055a20a1df5b2f Mon Sep 17 00:00:00 2001
From: Stratis Ioannidis
Date: Sat, 3 Nov 2012 23:47:39 -0700
Subject: general

---
 general.tex | 2 ++
 1 file changed, 2 insertions(+)

(limited to 'general.tex')

diff --git a/general.tex b/general.tex
index 589b176..cb2154b 100644
--- a/general.tex
+++ b/general.tex
@@ -67,4 +67,6 @@ eigenvalue is larger than 1. Hence $\log\det R\geq 0$ and an approximation
 on $\tilde{V}$ gives an approximation ratio on $V$ (see discussion above).
 
 \subsection{Beyond Linear Models}
+Selecting experiments that maximize the information gain in the Bayesian setup leads to a natural generalization to learning settings beyond linear regression. In particular, suppose that the measurements
+
 TODO: Independent noise model. Captures models such as logistic regression, classification, etc. Arbitrary prior. Show that change in the entropy is submodular (cite Krause, Guestrin).
-- 
cgit v1.2.3-70-g09d2
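The submodularity claim sketched in the TODO (change in posterior entropy as a set function of the chosen experiments, cf. Krause and Guestrin) can be illustrated for the Gaussian linear-regression case, where the information gain reduces to a log-determinant. The sketch below is illustrative only: the prior $N(0, I)$, the noise variance `sigma2`, and the function names `info_gain` and `greedy_select` are assumptions of this example, not part of the patch.

```python
import numpy as np

def info_gain(X, S, sigma2=1.0):
    # Entropy reduction of a Gaussian prior N(0, I) on the weights after
    # observing experiments in S with i.i.d. Gaussian noise of variance
    # sigma2: 0.5 * logdet(I + (1/sigma2) * sum_{i in S} x_i x_i^T).
    # (Assumed model; the patch's TODO covers more general priors/noise.)
    d = X.shape[1]
    M = np.eye(d) + X[S].T @ X[S] / sigma2
    return 0.5 * np.linalg.slogdet(M)[1]

def greedy_select(X, k, sigma2=1.0):
    # Greedy marginal-gain maximization; when the gain is monotone
    # submodular this enjoys the classic (1 - 1/e) guarantee.
    S = []
    for _ in range(k):
        rest = [i for i in range(len(X)) if i not in S]
        best = max(rest, key=lambda i: info_gain(X, S + [i], sigma2))
        S.append(best)
    return S

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))   # 20 candidate experiments in R^5
S = greedy_select(X, 3)
print(S, info_gain(X, S))
```

For non-Gaussian likelihoods (logistic regression, classification) the log-determinant formula no longer applies, which is precisely the generalization the TODO proposes to work out.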