author     unknown <kvetonb@PALOL4KG92Q1.am.thmulti.com>  2012-03-01 02:24:18 -0800
committer  unknown <kvetonb@PALOL4KG92Q1.am.thmulti.com>  2012-03-01 02:24:18 -0800
commit     d2f62d848bf0f499432446acb6ae118a2d656f60 (patch)
tree       93a47424abf01a566dd74e45ca722f1efa3618be /algorithm.tex
parent     007b83e185a348108af232bf8752722a4e46796b (diff)
download   kinect-d2f62d848bf0f499432446acb6ae118a2d656f60.tar.gz

Algorithm

Diffstat (limited to 'algorithm.tex')
-rw-r--r--  algorithm.tex  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/algorithm.tex b/algorithm.tex
index 15eec4b..c9130cc 100644
--- a/algorithm.tex
+++ b/algorithm.tex
@@ -9,10 +9,10 @@ Provide a guideline for the rest of the section.
The nearest-neighbor (NN) classifier worked well on the dead body dataset. It is well known that the decision boundary of the NN classifier is piecewise linear [cite]. A mixture of Gaussians [cite] is a generative model that can be used for NN classification. In our domain, each person is represented by a single Gaussian, centered at the person's mean profile. All Gaussians share the same covariance matrix, which encodes the variance and covariance of attributes around the mean profiles. In particular, our probabilistic model is given by:
\begin{align}
- P(\bx, y) = N(\bx | \bu_y, \Sigma) P(y),
+ P(\bx, y) = \cN(\bx | \bu_y, \Sigma) P(y),
\label{eq:mixture of Gaussians}
\end{align}
-where $P(y)$ is the probability that the person $y$ appears in front of the camera and $N(\bx | \bu_y, \Sigma)$ is the probability that the profile $\bx$ belongs to the person $y$. This probability is modeled as a normal distribution, centered at the mean profile of the person $y$, with covariance matrix $\Sigma$. Since all Gaussians in the mixture share the same covariance matrix, all decision boundaries, $P(\bx, y_1) = P(\bx, y_2)$ for some $y_1$ and $y_2$, are linear.
+where $P(y)$ is the probability that the person $y$ appears in front of the camera and $\cN(\bx | \bu_y, \Sigma)$ is the probability that the profile $\bx$ belongs to the person $y$. This probability is modeled as a normal distribution, centered at the mean profile of the person $y$, with covariance matrix $\Sigma$. Since all Gaussians in the mixture share the same covariance matrix, all decision boundaries, $P(\bx, y_1) = P(\bx, y_2)$ for some $y_1$ and $y_2$, are linear.
The parameters of our model can be easily learned by maximum-likelihood (ML) estimation [cite]. The probability $P(y)$ is the fraction of the time that the person $y$ appears in the training set. The mean profile $\bu_y$ is estimated as the mean of all profiles corresponding to the person $y$. The covariance matrix $\Sigma$ is estimated as $\Sigma = \sum_y P(y) \Sigma_y$, where $\Sigma_y$ is the covariance matrix of the profiles for the person $y$.
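To make the estimation step concrete, here is a minimal NumPy sketch of the ML estimates (an illustration, not part of the commit). The function name fit_shared_gaussians and the array layout, profiles as rows of X with integer person labels in y, are assumptions for this sketch, not from the paper.

import numpy as np

def fit_shared_gaussians(X, y):
    # ML estimates for the shared-covariance mixture of Gaussians.
    # X: (n, d) array of profiles; y: (n,) array of integer person labels.
    labels = np.unique(y)
    d = X.shape[1]
    priors = np.array([np.mean(y == k) for k in labels])        # P(y)
    means = np.array([X[y == k].mean(axis=0) for k in labels])  # mean profiles
    # Pooled covariance Sigma = sum_y P(y) Sigma_y, where Sigma_y is the
    # ML covariance of the profiles of person y.
    sigma = np.zeros((d, d))
    for p, k, mu in zip(priors, labels, means):
        centered = X[y == k] - mu
        sigma += p * (centered.T @ centered) / len(centered)
    return labels, priors, means, sigma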
@@ -20,7 +20,7 @@ The inference in our model can be done efficiently. In particular, note that:
\begin{align}
P(y | \bx) =
  \frac{P(\bx | y) P(y)}{\sum_{y'} P(\bx | y') P(y')} =
-  \frac{N(\bx | \bu_y, \Sigma) P(y)}{\sum_{y'} N(\bx | \bu_{y'}, \Sigma) P(y')}.
+  \frac{\cN(\bx | \bu_y, \Sigma) P(y)}{\sum_{y'} \cN(\bx | \bu_{y'}, \Sigma) P(y')}.
\label{eq:inference}
\end{align}
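For completeness, a matching sketch of the inference step (again an illustration, not part of the commit), computed in log space for numerical stability. The helper name posterior is assumed, and it reuses the outputs of the fit_shared_gaussians sketch above.

import numpy as np
from scipy.stats import multivariate_normal

def posterior(x, priors, means, sigma):
    # P(y | x) by Bayes' rule over the shared-covariance Gaussians.
    log_joint = np.array([
        multivariate_normal.logpdf(x, mean=mu, cov=sigma) + np.log(p)
        for mu, p in zip(means, priors)
    ])
    log_joint -= log_joint.max()   # shift to avoid overflow in exp
    post = np.exp(log_joint)
    return post / post.sum()       # normalized posterior P(y | x)

Because the covariance matrix is shared, the quadratic term of each log-density is common to all classes, so the argmax over $y$ reduces to a linear rule in $\bx$, consistent with the linear decision boundaries noted above.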