| field | value | date |
|---|---|---|
| author | Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> | 2012-10-30 18:48:59 -0700 |
| committer | Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> | 2012-10-30 18:48:59 -0700 |
| commit | 1b73fb44997220a89c90ffa278e2cf6d9dc5bc6b (patch) | |
| tree | e266224be00625ae8e4fc6b2ce32fc7a6701dc75 /problem.tex | |
| parent | 60efcc1d97c0cad6446db44dd1b25baf67c57566 (diff) | |
| parent | 460f799b52b7a4679df9eb843ec22d98b0283dcb (diff) | |
| download | recommendation-1b73fb44997220a89c90ffa278e2cf6d9dc5bc6b.tar.gz | |
conflict
Diffstat (limited to 'problem.tex')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | problem.tex | 2 |

1 file changed, 1 insertion, 1 deletion
```diff
diff --git a/problem.tex b/problem.tex
index d7cf38c..81b7656 100644
--- a/problem.tex
+++ b/problem.tex
@@ -79,7 +79,7 @@ problem:
 This optimization, commonly known as \emph{ridge regression}, reduces to a least squares fit for $\mu=\infty$. For finite $\mu$, ridge regression acts as a sort of ``Occam's razor'', favoring a \emph{parsimonious} model for $\beta$: among two vectors with the same square error, the one with the smallest norm is preferred. This is consistent with the Gaussian prior on $\beta$, which implies that vectors with small norms are more likely.
 %In practice, ridge regression is known to give better prediction results over new data than model parameters computed through a least squares fit.
-\subsection{A Budgeted Auction}
+\subsection{A Budgeted Auction}\label{sec:auction}
 TODO Explain the optimization problem, why it has to be formulated as an auction problem. Explain the goals:
```
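For reference, the hunk's context describes the ridge-regression objective in words. Below is a minimal sketch of that objective in LaTeX, under one assumption: the paper's own display (above line 79 of problem.tex, not visible in this hunk) must weight the square-error term by $\mu$, since the text states that $\mu=\infty$ recovers a least squares fit. The symbols $X$, $y$, and $\beta$ are the usual design matrix, response vector, and coefficient vector; the closed form is standard ridge algebra, not taken from the paper.

```latex
% Sketch of the ridge-regression objective described in the hunk's context.
% Assumption: \mu weights the square-error term, so \mu = \infty recovers a
% plain least squares fit, as the surrounding text states; the paper's own
% display above line 79 is not visible in this diff.
\[
  \hat{\beta} \;=\; \arg\min_{\beta}\;
    \mu \,\lVert y - X\beta \rVert_2^2 \;+\; \lVert \beta \rVert_2^2 ,
\]
% Closed form for finite \mu (standard ridge algebra, not from the paper);
% letting \mu \to \infty recovers the least squares solution (X^T X)^{-1} X^T y.
\[
  \hat{\beta} \;=\; \Bigl( X^{\top} X + \tfrac{1}{\mu} I \Bigr)^{-1} X^{\top} y .
\]
```

The ``Gaussian prior'' remark in the context lines corresponds to the standard MAP reading: with $\beta \sim \mathcal{N}(0, \sigma^2 I)$ and Gaussian observation noise, maximizing the posterior over $\beta$ is equivalent to this penalized least squares problem.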
