| author | Jon Whiteaker <jbw@berkeley.edu> | 2012-09-20 19:51:48 -0700 |
|---|---|---|
| committer | Jon Whiteaker <jbw@berkeley.edu> | 2012-10-21 19:48:21 -0700 |
| commit | afe44781966d3a18e96d19759dfa182ebf64100e (patch) | |
| tree | ea75624f5c5ff35f0f9ce2eba2952cd8d99c7a0c /algorithm.tex | |
| parent | fe28145b153df0fa8c216a79c3f1f1d6f30f6078 (diff) | |
| download | kinect-afe44781966d3a18e96d19759dfa182ebf64100e.tar.gz | |
small fixes to algorithms
Diffstat (limited to 'algorithm.tex')
| -rw-r--r-- | algorithm.tex | 10 |
1 files changed, 5 insertions, 5 deletions
```diff
diff --git a/algorithm.tex b/algorithm.tex
index ff396eb..faadd9a 100644
--- a/algorithm.tex
+++ b/algorithm.tex
@@ -56,7 +56,7 @@ observation $\bx$, the model predicts $\hat{y} = \arg\max_y P(y | \bx)$, where:
 \Sigma)P(y)}
 \end{equation}
 In this setting, the decision boundary between two classes $y_1$ and $y_2$:
-$\set{\bx | P(\bx, y_1) = P(\bx, y_2)}$ is an hyperplane \cite{bishop06pattern}
+$\set{\bx | P(\bx, y_1) = P(\bx, y_2)}$ is a hyperplane \cite{bishop06pattern}
 and the mixture of Gaussians model can be viewed as a probabilistic variant of
 the nearest-neighbor (NN) classifier in Section~\ref{sec:uniqueness}.
 
@@ -79,14 +79,14 @@ and the higher the precision.
 \subsection{Sequential hypothesis testing}
 \label{sec:SHT}
 
-In our setting (see \xref{sec:experiment-design}), skeletons measurements
+In our setting (see \xref{sec:experiment-design}), skeleton measurements
 are not isolated. On the contrary, everytime a person walks in front of the
 camera we get a set of time-indexed measurements belonging to the same
 individual that we want to classify. The mixture of Gaussians model can be
 extended to temporal inference through through the Sequential hypothesis
 testing \cite{wald47sequential} framework. In this framework, a subject is
-sequentially tested for belonging to one of several class, by assuming that
-conditioned on the class, the measurements are independent realisations of the
+sequentially tested for belonging to one of several classes, by assuming that
+conditioned on the class, the measurements are independent realizations of the
 same random variable. In our case, the probability that the sequence of data
 $\bx^{(1)}, \dots, \bx^{(t)}$ belongs to the class $y$ at time $t$ is given by:
 \begin{equation}\label{eq:SHT}
@@ -104,5 +104,5 @@ prediction is accepted when the classifier is confident, that is $P(\hat{y}
 Sequential hypothesis testing is a common technique for smoothing temporal
 predictions. In particular, note that the prediction at time $t$ depends on
 all data up to time $t$. This reduces the variance of predictions, especially when
-input data are noisy, such as in the domain of skeleton recognition.
+input data is noisy, such as in the domain of skeleton recognition.
```
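The hunks above edit the paper's description of sequential hypothesis testing, but the body of equation \eqref{eq:SHT} sits outside the diff context. As a reading aid only, the sketch below shows the standard accumulation the surrounding text describes: per-frame class-conditional Gaussian log-likelihoods are summed over time and combined with the class prior to give $P(y \mid \bx^{(1)}, \dots, \bx^{(t)})$. This is not code from the repository; the names (`sequential_posteriors`, `class_means`, `shared_cov`) are invented for illustration, and the shared-covariance Gaussian form is an assumption based on the mixture-of-Gaussians model mentioned in the first hunk.

```python
# Minimal sketch of sequential hypothesis testing over skeleton frames,
# assuming class-conditional Gaussians with a shared covariance.
# Illustration only; names and the exact form of eq:SHT are assumptions.
import numpy as np
from scipy.stats import multivariate_normal


def sequential_posteriors(frames, class_means, shared_cov, priors):
    """Return P(y | x^(1..t)) for every t, accumulating log-likelihoods."""
    log_post = np.log(priors).astype(float)  # start from log P(y)
    history = []
    for x in frames:  # one measurement per time step
        log_lik = np.array([
            multivariate_normal.logpdf(x, mean=mu, cov=shared_cov)
            for mu in class_means
        ])
        log_post = log_post + log_lik            # independent frames: add log-likelihoods
        post = np.exp(log_post - log_post.max())  # normalize stably
        history.append(post / post.sum())
    return np.array(history)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    means = np.array([[0.0, 0.0], [1.0, 1.0]])   # two hypothetical classes
    cov = 0.5 * np.eye(2)
    frames = rng.multivariate_normal(means[1], cov, size=10)  # synthetic "skeleton" frames
    posts = sequential_posteriors(frames, means, cov, priors=np.array([0.5, 0.5]))
    print(posts[-1])  # posterior concentrates on the true class as t grows
```

Running this classifies ten synthetic frames drawn from the second class; the posterior for that class approaches 1 as $t$ grows, which is the variance-reduction effect the last hunk describes.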
