path: root/algorithm.tex
author    unknown <Brano@Toshibicka.(none)>  2012-03-04 13:13:35 -0800
committer unknown <Brano@Toshibicka.(none)>  2012-03-04 13:13:35 -0800
commit    88f7c5cc1cabcade583c41c55d9df3e5d8cab300 (patch)
tree      ff34e88cf1d6b48c961e0680cd6d9ea4246b6f67 /algorithm.tex
parent    c71155db3f7ad9a1d8b24d6db9e13066fef9f69d (diff)
Algorithms
Diffstat (limited to 'algorithm.tex')
-rw-r--r--  algorithm.tex  8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/algorithm.tex b/algorithm.tex
index c7d22ae..fbe2dc8 100644
--- a/algorithm.tex
+++ b/algorithm.tex
@@ -23,13 +23,13 @@ The mixture of Gaussians model has many advantages. First, the model can be easi
\end{align}
In practice, the prediction $\hat{y}$ is accepted when the classifier is confident. In other words, $P(\hat{y} | \bx) \! > \! \delta$, where $\delta \in (0, 1)$ is a threshold that controls the precision and recall of the classifier. In general, the higher the threshold $\delta$, the lower the recall and the higher the precision.
-In this work, we use the mixture of Gaussians model for skeleton recognition. Skeleton measurements are represented by a vector $\bx$ and each person is assigned to one class $y$. To verify that our approach is suitable for skeleton recognition, we plot for each skeleton feature $x_k$ (Section~\ref{sec:experiment}) the histogram of differences between the feature and its expectation $(\bar{\bx}_y)_k$ in the corresponding class $y$ (Figure~\ref{fig:marginals}). All histograms look approximately normal. This indicates that all class conditionals $P(\bx | y)$ are multivariate normal and our generative model, although very simple, may be nearly optimal \cite{bishop06pattern}.
+In this work, we use the mixture of Gaussians model for skeleton recognition. Skeleton measurements are represented by a vector $\bx$ and each person is assigned to one class $y$. In particular, our dataset $\cD = \set{(\bx_1, y_1), \dots, (\bx_n, y_n)}$ consists of $n$ pairs $(\bx_i, y_i)$, where $y_i$ is the label of the skeleton $\bx_i$. To verify that our method is suitable for skeleton recognition, we plot for each skeleton feature $x_k$ (Section~\ref{sec:experiment}) the histogram of differences between all measurements and the expectation given the class $(\bx_i)_k - \E{}{x_k | y_i}$ (Figure~\ref{fig:error marginals}). All histograms look approximately normal. This indicates that all class conditionals $P(\bx | y)$ are multivariate normal and our generative model, although very simple, may be nearly optimal \cite{bishop06pattern}.
\begin{figure}[t]
\centering
- \includegraphics[height=4.4in, angle=90, bb=4.5in 1.5in 6.5in 7in]{graphics/Marginals}
- \caption{Histograms of differences between 9 skeleton measurements $x_k$ (Section~\ref{sec:experiment}) and their expected value $(\bar{\bx}_y)_k$ in the corresponding class $y$.}
- \label{fig:marginals}
+ \includegraphics[height=4.4in, angle=90, bb=4.5in 1.5in 6.5in 7in]{graphics/ErrorMarginals}
+ \caption{Histograms of differences between 9 skeleton measurements $x_k$ (Section~\ref{sec:experiment}) and their expectation given the class $y$.}
+ \label{fig:error marginals}
\end{figure}
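The thresholded prediction rule in the hunk above (accept $\hat{y}$ only when $P(\hat{y} | \bx) > \delta$) can be sketched as follows. This is a minimal sketch under assumptions not stated in the patch: one full-covariance Gaussian is fit per class with an empirical class prior, and the function names (`fit_gaussians`, `predict`) are illustrative, not taken from the paper's code.

```python
import numpy as np

def fit_gaussians(X, y):
    """Fit one multivariate Gaussian per class plus an empirical prior.

    Returns {class: (mean, covariance, prior)}. A small ridge is added to
    each covariance for numerical stability (an implementation choice,
    not something the paper specifies).
    """
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mean = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        prior = Xc.shape[0] / X.shape[0]
        params[c] = (mean, cov, prior)
    return params

def _log_gauss(x, mean, cov):
    # Log-density of a multivariate normal, computed via slogdet/solve
    # to avoid forming an explicit inverse.
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

def predict(params, x, delta=0.9):
    """Return (label, posterior); label is None when the classifier is
    not confident, i.e. P(y_hat | x) <= delta."""
    logs = {c: np.log(p) + _log_gauss(x, m, S) for c, (m, S, p) in params.items()}
    m = max(logs.values())                      # subtract max for stability
    probs = {c: np.exp(v - m) for c, v in logs.items()}
    Z = sum(probs.values())
    probs = {c: v / Z for c, v in probs.items()}
    c_hat = max(probs, key=probs.get)
    return (c_hat if probs[c_hat] > delta else None), probs[c_hat]
```

As the context line before the hunk notes, raising `delta` trades recall for precision: fewer skeletons clear the confidence bar, but those that do are classified more reliably.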