| author | unknown <kvetonb@PALOL4KG92Q1.am.thmulti.com> | 2012-03-01 02:24:18 -0800 |
|---|---|---|
| committer | unknown <kvetonb@PALOL4KG92Q1.am.thmulti.com> | 2012-03-01 02:24:18 -0800 |
| commit | d2f62d848bf0f499432446acb6ae118a2d656f60 | |
| tree | 93a47424abf01a566dd74e45ca722f1efa3618be | |
| parent | 007b83e185a348108af232bf8752722a4e46796b | |
| download | kinect-d2f62d848bf0f499432446acb6ae118a2d656f60.tar.gz | |
Algorithm
| -rw-r--r-- | algorithm.tex | 6 |
| -rw-r--r-- | utils.tex | 112 |
2 files changed, 115 insertions, 3 deletions
diff --git a/algorithm.tex b/algorithm.tex
index 15eec4b..c9130cc 100644
--- a/algorithm.tex
+++ b/algorithm.tex
@@ -9,10 +9,10 @@ Provide a guideline for the rest of the section.
 The nearest-neighbor (NN) classifier worked well on the dead body dataset. It is a well-known fact that the decision boundary of the NN classifier is piecewise linear [cite]. A mixture of Gaussians [cite] is a generative model that can be used for NN classification. In our domain, each person is represented by a single Gaussian, which is centered at the person's mean profile. All Gaussians have the same covariance matrix, which encodes the variance and covariance of attributes around the mean profiles. In particular, our probabilistic model is given by:
 \begin{align}
-  P(\bx, y) = N(\bx | \bu_y, \Sigma) P(y),
+  P(\bx, y) = \cN(\bx | \bu_y, \Sigma) P(y),
   \label{eq:mixture of Gaussians}
 \end{align}
-where $P(y)$ is the probability that the person $y$ appears in front of the camera and $N(\bx | \bu_y, \Sigma)$ is the probability that the profile $\bx$ belongs to the person $y$. This probability is modeled by a normal distribution, which is centered at the mean profile of the person $y$ and has covariance matrix $\Sigma$. Since all Gaussians in the mixture have the same covariance matrix, all decision boundaries, $P(\bx, y_1) = P(\bx, y_2)$ for some $y_1$ and $y_2$, are linear.
+where $P(y)$ is the probability that the person $y$ appears in front of the camera and $\cN(\bx | \bu_y, \Sigma)$ is the probability that the profile $\bx$ belongs to the person $y$. This probability is modeled by a normal distribution, which is centered at the mean profile of the person $y$ and has covariance matrix $\Sigma$. Since all Gaussians in the mixture have the same covariance matrix, all decision boundaries, $P(\bx, y_1) = P(\bx, y_2)$ for some $y_1$ and $y_2$, are linear.
 
 The parameters of our model can be easily learned using maximum-likelihood (ML) estimation [cite]. The probability $P(y)$ is estimated as the fraction of time that the person $y$ appears in the training set. The mean profile $\bu_y$ is estimated as the mean of all profiles corresponding to the person $y$. The covariance matrix $\Sigma$ is estimated as $\Sigma = \sum_y P(y) \Sigma_y$, where $\Sigma_y$ is the covariance matrix of profiles for the person $y$.
@@ -20,7 +20,7 @@
 The inference in our model can be done efficiently. In particular, note that:
 \begin{align}
   P(y | \bx)
   = \frac{P(\bx | y) P(y)}{\sum_y P(\bx | y) P(y)} =
-  \frac{N(\bx | \bu_y, \Sigma) P(y)}{\sum_y N(\bx | \bu_y, \Sigma) P(y)}.
+  \frac{\cN(\bx | \bu_y, \Sigma) P(y)}{\sum_y \cN(\bx | \bu_y, \Sigma) P(y)}.
   \label{eq:inference}
 \end{align}
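For reference, the linearity claim in the hunk above can be verified with a one-line log-ratio computation. A sketch (not part of the patch) in the paper's own notation, using the \bx, \bu, \cN, and \transpose macros that this commit adds to utils.tex below:

\begin{align*}
% Sketch only, not from the patch: with a shared covariance \Sigma,
% the quadratic term -\frac{1}{2} \bx\transpose \Sigma^{-1} \bx
% cancels in the log ratio of the two joint probabilities.
\log \frac{P(\bx, y_1)}{P(\bx, y_2)}
&= \log \frac{P(y_1)}{P(y_2)}
 - \frac{1}{2} (\bx - \bu_{y_1})\transpose \Sigma^{-1} (\bx - \bu_{y_1})
 + \frac{1}{2} (\bx - \bu_{y_2})\transpose \Sigma^{-1} (\bx - \bu_{y_2}) \\
&= (\bu_{y_1} - \bu_{y_2})\transpose \Sigma^{-1} \bx
 - \frac{1}{2} \bu_{y_1}\transpose \Sigma^{-1} \bu_{y_1}
 + \frac{1}{2} \bu_{y_2}\transpose \Sigma^{-1} \bu_{y_2}
 + \log \frac{P(y_1)}{P(y_2)},
\end{align*}

which is affine in $\bx$, so each pairwise boundary $P(\bx, y_1) = P(\bx, y_2)$ is a hyperplane.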
diff --git a/utils.tex b/utils.tex
--- a/utils.tex
+++ b/utils.tex
@@ -102,4 +102,116 @@ \newcommand{\ccg}{\cellcolor[gray]{0.9}} % requires \usepackage{colortbl}
 \newcommand{\tcg}[1]{\textcolor[gray]{0.5}{#1}}
+\newcommand{\commentout}[1]{}
+\newcommand{\ba}{{\bf a}}
+\newcommand{\bA}{{\bf A}}
+\newcommand{\bb}{{\bf b}}
+\newcommand{\bB}{{\bf B}}
+\newcommand{\bc}{{\bf c}}
+\newcommand{\bC}{{\bf C}}
+\newcommand{\bd}{{\bf d}}
+\newcommand{\bD}{{\bf D}}
+\newcommand{\be}{{\bf e}}
+\newcommand{\bE}{{\bf E}}
+\newcommand{\bh}{{\bf h}}
+\newcommand{\bH}{{\bf H}}
+\newcommand{\bi}{{\bf i}}
+\newcommand{\bI}{{\bf I}}
+\newcommand{\bM}{{\bf M}}
+\newcommand{\bs}{{\bf s}}
+\newcommand{\bS}{{\bf S}}
+\newcommand{\bu}{{\bf u}}
+\newcommand{\bU}{{\bf U}}
+\newcommand{\bv}{{\bf v}}
+\newcommand{\bV}{{\bf V}}
+\newcommand{\bw}{{\bf w}}
+\newcommand{\bwbar}{\overline{\bw}}
+\newcommand{\bwhat}{\widehat{\bw}}
+\newcommand{\bwstar}{\bw^\ast}
+\newcommand{\bwtilde}{\widetilde{\bw}}
+\newcommand{\bW}{{\bf W}}
+\newcommand{\bx}{{\bf x}}
+\newcommand{\bX}{{\bf X}}
+\newcommand{\by}{{\bf y}}
+\newcommand{\bY}{{\bf Y}}
+\newcommand{\bz}{{\bf z}}
+\newcommand{\bZ}{{\bf Z}}
+\newcommand{\balpha}{{\bm \alpha}} % requires \usepackage{bm}
+\newcommand{\bell}{{\bm \ell}}
+\newcommand{\cA}{\mathcal{A}}
+\newcommand{\cC}{\mathcal{C}}
+\newcommand{\cD}{\mathcal{D}}
+\newcommand{\cE}{\mathcal{E}}
+\newcommand{\cF}{\mathcal{F}}
+\newcommand{\cG}{\mathcal{G}}
+\newcommand{\cH}{\mathcal{H}}
+\newcommand{\cL}{\mathcal{L}}
+\newcommand{\cM}{\mathcal{M}}
+\newcommand{\cN}{\mathcal{N}}
+\newcommand{\cO}{\mathcal{O}}
+\newcommand{\cP}{\mathcal{P}}
+\newcommand{\cS}{\mathcal{S}}
+\newcommand{\cT}{\mathcal{T}}
+\newcommand{\cU}{\mathcal{U}}
+\newcommand{\cX}{\mathcal{X}}
+\newcommand{\cY}{\mathcal{Y}}
+\newcommand{\cZ}{\mathcal{Z}}
+\newcommand{\eps}{\varepsilon}
+\newcommand{\pistar}{\pi^\ast}
+\newcommand{\Qpi}{Q^\pi}
+\newcommand{\Qstar}{Q^\ast}
+\newcommand{\Vhat}{\widehat{V}}
+\newcommand{\Vpi}{V^\pi}
+\newcommand{\Vbw}{V^\bw}
+\newcommand{\Vbwbar}{V^{\bwbar}}
+\newcommand{\Vbwhat}{V^{\bwhat}}
+\newcommand{\Vbwstar}{V^{\bwstar}}
+\newcommand{\Vbwtilde}{V^{\bwtilde}}
+\newcommand{\Vstar}{V^\ast}
+\newcommand{\wbar}{\overline{w}}
+\newcommand{\what}{\widehat{w}}
+\newcommand{\wstar}{w^\ast}
+\newcommand{\wtilde}{\widetilde{w}}
+
+\newcommand{\integerset}{\mathbb{Z}}
+\newcommand{\naturalset}{\mathbb{N}}
+\newcommand{\realset}{\mathbb{R}}
+
+\newcommand{\betapdf}{P_{\mathrm{beta}}}
+\newcommand{\betacdf}{F_{\mathrm{beta}}}
+\newcommand{\gammapdf}{P_{\mathrm{gamma}}}
+\newcommand{\gammacdf}{F_{\mathrm{gamma}}}
+\newcommand{\normalpdf}{\cN}
+\newcommand{\normalcdf}{F_{\cN}}
+\newcommand{\unifpdf}[2]{\mathrm{U}_{[#1, #2]}}
+\newcommand{\unifcdf}[2]{F_{\mathrm{U}_{[#1, #2]}}}
+
+\newcommand{\convexhull}[1]{\mathrm{Conv}\left[#1\right]}
+\newcommand{\domain}[1]{\mathrm{Dom}\left[#1\right]}
+\newcommand{\range}[1]{\mathrm{Rng}\left[#1\right]}
+\newcommand{\Parents}{\mathsf{Par}}
+
+\newcommand{\lyapunov}{L}
+\newcommand{\lyapunovfactor}{\kappa}
+
+\newcommand{\abs}[1]{\left|#1\right|}
+\newcommand{\ceils}[1]{\left\lceil#1\right\rceil}
+\newcommand{\E}[2]{\mathbb{E}_{#1} \! \left[#2\right]}
+\newcommand{\Eabs}[2]{\mathbb{E}_{#1} \! \abs{#2}}
+\newcommand{\floors}[1]{\left\lfloor#1\right\rfloor}
+\newcommand{\I}[1]{\mathds{1} \! \left\{#1\right\}} % requires \usepackage{dsfont}
+\newcommand{\intin}[2]{\int_{#1} \! \! \! #2 \ud #1}
+\newcommand{\maxnorm}[1]{\left\|#1\right\|_\infty}
+\newcommand{\maxnormw}[2]{\left\|#1\right\|_{\infty, #2}}
+\newcommand{\mode}[1]{\widehat{#1}}
+\renewcommand{\neg}[1]{\overline{#1}}
+\newcommand{\negpart}[1]{\left[#1\right]^-}
+\newcommand{\normw}[2]{\left\|#1\right\|_{#2}}
+\newcommand{\pospart}[1]{\left[#1\right]^+}
+\newcommand{\set}[1]{\left\{#1\right\}}
+\newcommand{\sgn}{\mathrm{sgn}}
+\newcommand{\subst}[2]{\left\{#1 = #2\right\}}
+\newcommand{\transpose}{^\mathsf{\scriptscriptstyle T}}
+\newcommand{\ud}{\, \mathrm{d}}
+\newcommand{\var}[2]{\mathrm{var}_{#1} \! \left[#2\right]}
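For reference, a minimal usage sketch of the new macros (not part of the patch): it restates the first two moments of the model's Gaussian with \E, \cN, \bu, \bx, \transpose, and \realset, and assumes amsmath is loaded, which algorithm.tex already relies on for align:

\begin{align*}
  % Hypothetical usage example, not from the patch: first and second
  % moments of the Gaussian in the paper's model.
  \E{\bx \sim \cN(\bu_y, \Sigma)}{\bx} &= \bu_y,
  \\
  \E{\bx \sim \cN(\bu_y, \Sigma)}{(\bx - \bu_y) (\bx - \bu_y)\transpose} &= \Sigma,
  \qquad \bx \in \realset^d.
\end{align*}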
