| author | unknown <Brano@Toshibicka.(none)> | 2012-03-05 12:00:46 -0800 |
|---|---|---|
| committer | unknown <Brano@Toshibicka.(none)> | 2012-03-05 12:00:46 -0800 |
| commit | 27b82b9eaf26474a346638a03f7edfd2f79740b4 (patch) | |
| tree | 4108865353227219d8a4f2949cbc4986f9f633ad | |
| parent | 4f4d6896872f30bf36bba8975aca61ddf3cda156 (diff) | |
| download | kinect-27b82b9eaf26474a346638a03f7edfd2f79740b4.tar.gz | |
Algorithms
| -rw-r--r-- | algorithm.tex | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/algorithm.tex b/algorithm.tex
index 30ce77c..3045929 100644
--- a/algorithm.tex
+++ b/algorithm.tex
@@ -20,7 +20,7 @@ distribution of the model is given by:
 P(\bx, y) = \cN(\bx | \bar{\bx}_y, \Sigma) P(y),
 \label{eq:mixture of Gaussians}
 \end{align}
-where $\bx$ denotes an observation, $y$ is a class, $P(y)$ is the probability of the class, and $\cN(\bx | \bar{\bx}_y, \Sigma)$ is the conditional probability of $\bx$ given $y$. The conditional is a multivariate normal distribution. Its mean is $\bar{\bx}_y$ and the variance of $\bx$ is captured by the covariance matrix $\Sigma$. The decision boundary between any two classes $y$ is linear when all conditionals $\cN(\bx | \bar{\bx}_y, \Sigma)$ have the same covariance matrix $\Sigma$ \cite{bishop06pattern}. In this scenario, the mixture of Gaussians model can be viewed as a probabilistic variant of the nearest-neighbor (NN) classifier in Section~\ref{sec:uniqueness}.
+where $\bx$ denotes an observation, $y$ is a class, $P(y)$ is the probability of the class, and $\cN(\bx | \bar{\bx}_y, \Sigma)$ is the conditional probability of $\bx$ given $y$. The conditional is a multivariate normal distribution. Its mean is $\bar{\bx}_y$ and the variance of $\bx$ is captured by the covariance matrix $\Sigma$. The decision boundary between two classes $y_1$ and $y_2$, $P(\bx, y_1) = P(\bx, y_2)$ for all $\bx$, is linear when all conditionals $\cN(\bx | \bar{\bx}_y, \Sigma)$ have the same covariance matrix $\Sigma$ \cite{bishop06pattern}. In this case, the mixture of Gaussians model can be viewed as a probabilistic variant of the nearest-neighbor (NN) classifier in Section~\ref{sec:uniqueness}.
 
 The mixture of Gaussians model has many advantages.
 First, the model can be easily learned using maximum-likelihood (ML) estimation \cite{bishop06pattern}.
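For reference, the linearity claim in the revised sentence follows from a short standard calculation (cf. \cite{bishop06pattern}): equate the two joint probabilities, take logs, and the quadratic term in $\bx$ cancels because both classes share the same $\Sigma$. A sketch, written with the paper's own \bx/\cN macros:

```latex
% Setting P(\bx, y_1) = P(\bx, y_2) and taking logs, with a shared \Sigma,
% each side is -\tfrac12 (\bx - \bar{\bx}_{y_i})^\top \Sigma^{-1} (\bx - \bar{\bx}_{y_i}) + \ln P(y_i)
% up to a common normalization constant. The term \bx^\top \Sigma^{-1} \bx
% appears on both sides and cancels, leaving
\begin{align}
(\bar{\bx}_{y_1} - \bar{\bx}_{y_2})^\top \Sigma^{-1} \bx
  = \tfrac12 \bar{\bx}_{y_1}^\top \Sigma^{-1} \bar{\bx}_{y_1}
  - \tfrac12 \bar{\bx}_{y_2}^\top \Sigma^{-1} \bar{\bx}_{y_2}
  + \ln \frac{P(y_2)}{P(y_1)},
\end{align}
% which is linear in \bx, as claimed.
```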

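The diff's closing sentence notes that the model is easily learned by ML estimation. As a concrete illustration (a minimal sketch, not code from this repository), the ML estimates of the shared-covariance model are the per-class sample means, the empirical class frequencies, and a pooled covariance matrix; prediction then reduces to the linear rule derived above. All names below are illustrative.

```python
import numpy as np

def fit_gaussian_mixture_classifier(X, y):
    """ML estimates for the shared-covariance mixture of Gaussians:
    per-class means, class priors, and one pooled covariance matrix."""
    classes = np.unique(y)
    n, d = X.shape
    means, priors = {}, {}
    pooled = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        means[c] = Xc.mean(axis=0)          # \bar{\bx}_y
        priors[c] = len(Xc) / n             # P(y)
        centered = Xc - means[c]
        pooled += centered.T @ centered
    sigma = pooled / n                      # ML (pooled) estimate of \Sigma
    return classes, means, priors, sigma

def predict(X, classes, means, priors, sigma):
    """Assign each row of X to the class maximizing log P(x, y);
    with a shared covariance this is a linear rule in x."""
    sigma_inv = np.linalg.inv(sigma)
    scores = []
    for c in classes:
        mu = means[c]
        # linear discriminant w^T x + b: the quadratic term cancels
        w = sigma_inv @ mu
        b = -0.5 * mu @ sigma_inv @ mu + np.log(priors[c])
        scores.append(X @ w + b)
    return classes[np.argmax(np.column_stack(scores), axis=1)]

# Usage on synthetic two-class data:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = fit_gaussian_mixture_classifier(X, y)
print(predict(X, *model))
```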