author     unknown <Brano@Toshibicka.(none)>   2012-03-04 12:30:26 -0800
committer  unknown <Brano@Toshibicka.(none)>   2012-03-04 12:30:26 -0800
commit     c71155db3f7ad9a1d8b24d6db9e13066fef9f69d (patch)
tree       21d25cc2a15c5a6cb0b685503ff0346220579b94 /algorithm.tex
parent     98a795015ea6bc565180a5330c0cf1ac36fda3e3 (diff)
download   kinect-c71155db3f7ad9a1d8b24d6db9e13066fef9f69d.tar.gz
Algorithms
Diffstat (limited to 'algorithm.tex')
-rw-r--r--   algorithm.tex   8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/algorithm.tex b/algorithm.tex
index cdc4f09..c7d22ae 100644
--- a/algorithm.tex
+++ b/algorithm.tex
@@ -12,9 +12,9 @@ A mixture of Gaussians \cite{bishop06pattern} is a generative probabilistic mode
P(\bx, y) = \cN(\bx | \bar{\bx}_y, \Sigma) P(y),
\label{eq:mixture of Gaussians}
\end{align}
-where $P(y)$ is the probability of class $y$ and $\cN(\bx | \bar{\bx}_y, \Sigma)$ is a multivariate normal distribution, which is known as a class conditional. The mean of the distribution is $\bar{\bx}_y$ and the variance around $\bar{\bx}_y$ is captured by the covariance matrix $\Sigma$. When all class conditionals have the same covariance matrix $\Sigma$, the decision boundary between any two classes $y$ is linear \cite{bishop06pattern}. In this setting, the mixture of Gaussians model can be viewed as a probabilistic formulation of the nearest-neighbor (NN) classifier from Section~\ref{sec:uniqueness}.
+where $P(y)$ is the probability of class $y$ and $\cN(\bx | \bar{\bx}_y, \Sigma)$ is a multivariate normal distribution, which models the density of $\bx$ given $y$. The mean of the distribution is $\bar{\bx}_y$ and the variance of $\bx$ around $\bar{\bx}_y$ is captured by the covariance matrix $\Sigma$. The decision boundary between any two classes is known to be linear when all conditionals $\cN(\bx | \bar{\bx}_y, \Sigma)$ have the same covariance matrix \cite{bishop06pattern}. In this setting, the mixture of Gaussians model can be viewed as a probabilistic variant of the nearest-neighbor (NN) classifier in Section~\ref{sec:uniqueness}.
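As a sketch of why the linearity holds (a standard derivation, using only the notation above): with a shared $\Sigma$, the quadratic term $\bx^\top \Sigma^{-1} \bx$ cancels in the log-posterior ratio of any two classes $y$ and $y'$, leaving
\begin{align}
\log \frac{P(y | \bx)}{P(y' | \bx)}
= (\bar{\bx}_y - \bar{\bx}_{y'})^\top \Sigma^{-1} \bx
- \frac{1}{2} \bar{\bx}_y^\top \Sigma^{-1} \bar{\bx}_y
+ \frac{1}{2} \bar{\bx}_{y'}^\top \Sigma^{-1} \bar{\bx}_{y'}
+ \log \frac{P(y)}{P(y')},
\end{align}
which is linear in $\bx$; setting it to zero gives a hyperplane.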
-The mixture of Gaussians model has many advantages. First, the model can be easily learned using maximum-likelihood (ML) estimation \cite{bishop06pattern}. In particular, $P(y)$ is the frequency of class $y$ in training data, $\bar{\bx}_y$ is the expectation of $\bx$ given $y$, and the covariance matrix $\Sigma$ is estimated as a weighted sum $\Sigma = \sum_y P(y) \Sigma_y$, where $\Sigma_y$ is the covariance matrix corresponding to class $y$. Second, the inference in the model can be performed in a closed form. In particular, the predicted label is given by $\hat{y} = \arg\max_y P(y | \bx)$, where:
+The mixture of Gaussians model has many advantages. First, the model can be easily learned using maximum-likelihood (ML) estimation \cite{bishop06pattern}. In particular, $P(y)$ is the frequency of $y$ in the training set, $\bar{\bx}_y$ is the expectation of $\bx$ given $y$, and the covariance matrix is computed as $\Sigma = \sum_y P(y) \Sigma_y$, where $\Sigma_y$ represents the covariance of $\bx$ given $y$. Second, inference in the model can be performed in closed form. Specifically, the model predicts $\hat{y} = \arg\max_y P(y | \bx)$, where:
\begin{align}
P(y | \bx) =
\frac{P(\bx | y) P(y)}{\sum_y P(\bx | y) P(y)} =
@@ -23,12 +23,12 @@ The mixture of Gaussians model has many advantages. First, the model can be easi
\end{align}
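A minimal code sketch of these two steps, assuming numpy/scipy; the names X, y, fit, and posterior are illustrative and not part of the paper. It computes the ML estimates of $P(y)$, $\bar{\bx}_y$, and the pooled $\Sigma$, and then the closed-form posterior.

import numpy as np
from scipy.stats import multivariate_normal

def fit(X, y):
    # Maximum-likelihood estimates: class priors P(y), class means bar{x}_y,
    # and the pooled covariance Sigma = sum_y P(y) Sigma_y.
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}
    means = {c: X[y == c].mean(axis=0) for c in classes}
    sigma = sum(priors[c] * np.cov(X[y == c], rowvar=False, bias=True)
                for c in classes)
    return classes, priors, means, sigma

def posterior(x, classes, priors, means, sigma):
    # Closed-form inference: P(y | x) is proportional to N(x | bar{x}_y, Sigma) P(y).
    joint = np.array([priors[c] * multivariate_normal.pdf(x, mean=means[c], cov=sigma)
                      for c in classes])
    return joint / joint.sum()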
In practice, the prediction $\hat{y}$ is accepted when the classifier is confident. In other words, $P(\hat{y} | \bx) \! > \! \delta$, where $\delta \in (0, 1)$ is a threshold that controls the precision and recall of the classifier. In general, the higher the threshold $\delta$, the lower the recall and the higher the precision.
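Continuing the sketch above, the confidence test on $P(\hat{y} | \bx)$ amounts to a single comparison; the default value of delta below is only an example, not a value used in the paper.

def predict(x, classes, priors, means, sigma, delta=0.9):
    # Accept arg max_y P(y | x) only when the classifier is confident enough.
    p = posterior(x, classes, priors, means, sigma)
    best = int(np.argmax(p))
    return classes[best] if p[best] > delta else None  # None: prediction rejected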
-In this work, we use the mixture of Gaussians model for skeleton recognition. Skeleton measurements are represented by a vector $\bx$ and each person is assigned to one class $y$. To verify that our approach is suitable for skeleton recognition, we plot for each skeleton feature (Section~\ref{sec:experiment}) the histogram of differences between the feature and its mean value in the corresponding class (Figure~\ref{fig:marginals}). All distributions look approximately normal. This indicates that the class conditionals $P(\bx | y)$ are multivariate normal and our generative model may be nearly optimal.
+In this work, we use the mixture of Gaussians model for skeleton recognition. Skeleton measurements are represented by a vector $\bx$ and each person is assigned to one class $y$. To verify that our approach is suitable for skeleton recognition, we plot for each skeleton feature $x_k$ (Section~\ref{sec:experiment}) the histogram of differences between the feature and its expectation $(\bar{\bx}_y)_k$ in the corresponding class $y$ (Figure~\ref{fig:marginals}). All histograms look approximately normal. This suggests that the class conditionals $P(\bx | y)$ are approximately multivariate normal and that our generative model, although very simple, may be nearly optimal \cite{bishop06pattern}.
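A sketch of how such per-feature histograms can be produced, reusing the fit sketch above; matplotlib and the 3x3 panel layout are assumptions, the paper only specifies the differences $x_k - (\bar{\bx}_y)_k$ for the 9 features.

import matplotlib.pyplot as plt

def plot_marginals(X, y, means, feature_names):
    # Histogram of x_k - (bar{x}_y)_k for each of the 9 features, pooled over classes y.
    diffs = np.array([x - means[c] for x, c in zip(X, y)])
    fig, axes = plt.subplots(3, 3, figsize=(9, 9))
    for k, ax in enumerate(axes.ravel()):
        ax.hist(diffs[:, k], bins=30)
        ax.set_title(feature_names[k])
    fig.tight_layout()
    plt.show()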
\begin{figure}[t]
\centering
\includegraphics[height=4.4in, angle=90, bb=4.5in 1.5in 6.5in 7in]{graphics/Marginals}
- \caption{The histograms of differences between 9 skeleton features (Section~\ref{sec:experiment}) and their mean value for the corresponding person.}
+ \caption{Histograms of differences between 9 skeleton measurements $x_k$ (Section~\ref{sec:experiment}) and their expected value $(\bar{\bx}_y)_k$ in the corresponding class $y$.}
\label{fig:marginals}
\end{figure}