Diffstat (limited to 'experimental.tex')
-rw-r--r--  experimental.tex | 244
 1 file changed, 138 insertions(+), 106 deletions(-)
diff --git a/experimental.tex b/experimental.tex
index 3c6547b..b30ba10 100644
--- a/experimental.tex
+++ b/experimental.tex
@@ -29,8 +29,7 @@ laboratory setting. The Kinect is placed at the tee of a well traversed
hallway. The view of the Kinect is seen in \fref{fig:hallway}, showing the
color image, the depth image, and the fitted skeleton of a person in a single
frame. For each frame where a person is detected and a skeleton is fitted we
-collect the 3D coordinates of 20 body joints, and the color image recorded by
-the RGB camera.
+capture the 3D coordinates of 20 body joints and the color image.
\begin{figure}[t]
\begin{center}
@@ -57,19 +56,22 @@ is increasing.
\subsection{Experiment design}
\label{sec:experiment-design}
-Several reductions are then applied to the data set to extract \emph{features}
+We preprocess the data set to extract \emph{features}
from the raw data. First, the lengths of 15 body parts are computed from the
joint coordinates. These are distances between two contiguous joints in the
human body. If one of the two joints of a body part is not present or inferred
in a frame, the corresponding body part is reported as absent for the frame.
-Second, the number of features is reduced to 9 by using the vertical symmetry
+Second, we reduce the number of features to nine by using the vertical symmetry
of the human body: if two body parts are symmetric about the vertical axis, we
bundle them into one feature by averaging their lengths. If only one of them is
-present, we take the value of its counterpart. If none of them are present, the
-feature is reported as missing for the frame. The resulting nine features are:
-Head-ShoulderCenter, ShoulderCenter-Shoulder, Shoulder-Elbow, Elbow-Wrist,
-ShoulderCenter-Spine, Spine-HipCenter, HipCenter-HipSide, HipSide-Knee,
-Knee-Ankle. Finally, any frame with a missing feature is filtered out.
+present, we take its value. If neither of them is present, the feature is
+reported as missing for the frame. The resulting nine features include the six
+arm, leg, and pelvis measurements from \xref{sec:uniqueness}, and three
+additional measurements: spine length, shoulder breadth, and head size.
+Finally, any frame with a missing feature is filtered out.
+%The resulting nine features are: Head-ShoulderCenter, ShoulderCenter-Shoulder,
+%Shoulder-Elbow, Elbow-Wrist, ShoulderCenter-Spine, Spine-HipCenter,
+%HipCenter-HipSide, HipSide-Knee, Knee-Ankle.
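For concreteness, here is a minimal sketch of this feature reduction, assuming a hypothetical per-frame `joints` mapping from joint name to 3D coordinates (with `None` for absent or inferred joints); the joint names and pairing table are illustrative assumptions, not the paper's code:

```python
import numpy as np

# Symmetric body parts are bundled into one feature by averaging their
# lengths. The joint names and this pairing are illustrative; the three
# unpaired central parts (e.g. spine length) are handled analogously.
SYMMETRIC_PAIRS = {
    "Shoulder-Elbow": (("ShoulderLeft", "ElbowLeft"),
                       ("ShoulderRight", "ElbowRight")),
    "Elbow-Wrist": (("ElbowLeft", "WristLeft"),
                    ("ElbowRight", "WristRight")),
    # ... remaining pairs omitted
}

def part_length(joints, a, b):
    """Euclidean length of the body part between joints a and b,
    or None if either joint is absent/inferred in this frame."""
    if joints.get(a) is None or joints.get(b) is None:
        return None
    return float(np.linalg.norm(np.asarray(joints[a]) - np.asarray(joints[b])))

def frame_features(joints):
    """Reduce one frame's joints to bundled features; returns None for
    the whole frame if any feature is missing (the frame is filtered out)."""
    features = {}
    for name, (left, right) in SYMMETRIC_PAIRS.items():
        l, r = part_length(joints, *left), part_length(joints, *right)
        if l is not None and r is not None:
            features[name] = (l + r) / 2          # average the symmetric pair
        elif l is not None or r is not None:
            features[name] = l if l is not None else r  # take the one present
        else:
            return None                           # feature missing: drop frame
    return features
```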
Each detected skeleton also has an ID number which identifies the figure it
maps to from the figure detection stage. When there are consecutive frames with
@@ -77,35 +79,35 @@ the same ID, it means that the skeleton-fitting algorithm was able to detect
the skeleton continuously. This allows us to define the concept of a
\emph{run}: a sequence of frames with the same skeleton ID.
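Extracting runs then reduces to grouping consecutive frames by skeleton ID; a minimal sketch, assuming each frame is a `(skeleton_id, features)` pair in chronological order:

```python
from itertools import groupby

def extract_runs(frames):
    """Group a chronological list of (skeleton_id, features) frames into
    runs: maximal consecutive subsequences sharing the same skeleton ID."""
    return [list(group) for _, group in groupby(frames, key=lambda f: f[0])]
```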
-\begin{table}
-\begin{center}
-\caption{Data set statistics. The right part of the table shows the
-average numbers for different intervals of $k$, the rank of a person
-in the ordering given by the number of frames}
-\label{tab:dataset}
-\begin{tabular}{|l|r||r|r|r|}
-\hline
-Number of people & 25 & $k\leq 5$ & $5\leq k\leq 20$ & $k\geq 20$\\
-\hline
-Number of frames & 15945 & 1211 & 561 & 291 \\
-\hline
-Number of runs & 244 & 18 & 8 & 4\\
-\hline
-\end{tabular}
-\end{center}
-\end{table}
+%\begin{table}
+%\begin{center}
+%\caption{Data set statistics. The right part of the table shows the
+%average numbers for different intervals of $k$, the rank of a person
+%in the ordering given by the number of frames}
+%\label{tab:dataset}
+%\begin{tabular}{|l|r||r|r|r|}
+%\hline
+%Number of people & 25 & $k\leq 5$ & $5\leq k\leq 20$ & $k\geq 20$\\
+%\hline
+%Number of frames & 15945 & 1211 & 561 & 291 \\
+%\hline
+%Number of runs & 244 & 18 & 8 & 4\\
+%\hline
+%\end{tabular}
+%\end{center}
+%\end{table}
\begin{figure}[t]
\begin{center}
- \includegraphics[width=0.80\textwidth]{graphics/frames.pdf}
+ \includegraphics[width=0.49\textwidth]{graphics/frames.pdf}
\end{center}
\caption{Distribution of the frame ratio of each individual in the
data set}
+ \label{fig:frames}
\end{figure}
-\subsection{Results}
+\subsection{Offline learning setting}
-\paragraph{Offline setting.}
The mixture of Gaussians model is evaluated on the whole data set by
doing 10-fold cross validation: the data set is partitioned into 10
@@ -115,7 +117,7 @@ repeated for the 10 possible testing subsamples. Averaging the
prediction rate over these 10 training-testing experiments yields the
prediction rate for the chosen threshold.
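The evaluation loop is standard 10-fold cross validation; a minimal sketch, where `fit_model` and `predict` are hypothetical stand-ins for training the mixture of Gaussians classifier and making a thresholded prediction, and `X`, `y` are the feature matrix and person labels as numpy arrays:

```python
import numpy as np

def ten_fold_rate(X, y, fit_model, predict, threshold, n_folds=10):
    """Average prediction rate over the 10 train/test splits for one
    confidence threshold. fit_model and predict are stand-ins for the
    mixture-of-Gaussians training and thresholded prediction steps."""
    folds = np.array_split(np.random.permutation(len(y)), n_folds)
    rates = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        model = fit_model(X[train], y[train])
        correct = sum(predict(model, x, threshold) == label
                      for x, label in zip(X[test], y[test]))
        rates.append(correct / len(test))
    return float(np.mean(rates))
```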
-Figure \ref{fig:mixture} shows the precision-recall plot as the
+\fref{fig:offline} shows the precision-recall plot as the
threshold varies. Several curves are obtained for different group
sizes: people are ordered based on their numbers of frames, and all
the frames belonging to someone beyond a given rank in this ordering
@@ -124,20 +126,37 @@ increasing the number of people in the data set can be explained
by the overlaps between skeleton profiles due to the noise, as
discussed in Section~\ref{sec:uniqueness}, but also by the very small
number of runs available for the least present people, as seen in
-Table~\ref{tab:dataset}, which does not permit a proper training of
+\fref{fig:frames}, which does not permit a proper training of
the algorithm.
-\begin{figure}[t]
- \begin{center}
- \includegraphics[width=0.80\textwidth]{graphics/10fold-naive.pdf}
- \end{center}
- \caption{Precision-Recall curve for the mixture of Gaussians model
+\begin{figure*}[t]
+\begin{center}
+\subfloat[Mixture of Gaussians]{
+ \includegraphics[width=0.49\textwidth]{graphics/offline-nb.pdf}
+ \label{fig:offline:nb}
+}
+\subfloat[Sequential Hypothesis Learning]{
+ \includegraphics[width=0.49\textwidth]{graphics/offline-sht.pdf}
+ \label{fig:offline:sht}
+}
+ \caption{Precision-recall curves for both models in the offline setting, evaluated
with 10-fold cross validation. The data set is restricted to the top
- $n$ most present people}
- \label{fig:mixture}
-\end{figure}
+ $n_p$ most present people}
+\label{fig:offline}
+\end{center}
+\end{figure*}
+
+%\begin{figure}[t]
+% \begin{center}
+% \includegraphics[width=0.80\textwidth]{graphics/10fold-naive.pdf}
+% \end{center}
+% \caption{Precision-Recall curve for the mixture of Gaussians model
+% with 10-fold cross validation. The data set is restricted to the top
+% $n$ most present people}
+% \label{fig:mixture}
+%\end{figure}
-\paragraph{Online setting.}
+\subsection{Online learning setting}
Even though the previous evaluation is standard, it does not properly
reflect reality. A real-life setting could be the following: the
@@ -154,114 +173,127 @@ run. The analysis is therefore performed by partitioning the dataset
into 10 subsamples of equal size. For a given threshold, the algorithm
is trained and tested incrementally: trained on the first $k$
subsamples (in the chronological order) and tested on the $(k+1)$-th
-subsample. Figure~\ref{fig:sequential} shows the prediction-recall
+subsample. \fref{fig:online} shows the precision-recall
curve when averaging the prediction rate of the 10 incremental
experiments.
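In this setting the subsamples are chronological rather than random; a sketch of the incremental train/test loop, under the same hypothetical `fit_model` and `predict` helpers as above:

```python
import numpy as np

def online_rate(X, y, fit_model, predict, threshold, n_folds=10):
    """Train on the first k chronological subsamples and test on the
    (k+1)-th; average the prediction rate over the incremental experiments."""
    folds = np.array_split(np.arange(len(y)), n_folds)  # chronological order
    rates = []
    for k in range(1, n_folds):
        train = np.concatenate(folds[:k])
        test = folds[k]
        model = fit_model(X[train], y[train])
        correct = sum(predict(model, x, threshold) == label
                      for x, label in zip(X[test], y[test]))
        rates.append(correct / len(test))
    return float(np.mean(rates))
```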
-\begin{figure}[t]
- \begin{center}
- \includegraphics[width=0.80\textwidth]{graphics/online-sht.pdf}
- \end{center}
- \caption{Precision-Recall curve for the sequential hypothesis
- testing algorithm in the online setting. $n$ is the size of the
- group as in Figure~\ref{fig:mixture}}
- \label{fig:sequential}
-\end{figure}
+\begin{figure*}[t]
+\begin{center}
+\subfloat[Mixture of Gaussians]{
+ \includegraphics[width=0.49\textwidth]{graphics/online-nb.pdf}
+ \label{fig:online:nb}
+}
+\subfloat[Sequential Hypothesis Learning]{
+ \includegraphics[width=0.49\textwidth]{graphics/online-sht.pdf}
+ \label{fig:online:sht}
+}
+\caption{Precision-recall curves for the online setting. $n_p$ is the size of
+the group as in \fref{fig:offline}}
+\label{fig:online}
+\end{center}
+\end{figure*}
-\paragraph{Face recognition.}
+\subsection{Face recognition}
-We then compare the performance of skeleton recognition with the
-performance of face recognition as given by \textsf{face.com}
-\todo{REFERENCE NEEDED}. At the time of writing, this is the best
-performing face recognition algorithm on the LFW data set
-\cite{face-com}.
+We then compare the performance of skeleton recognition with the performance of
+face recognition as given by \textsf{face.com}. At the time of writing, this
+is the best-performing face recognition algorithm on the LFW data
+set~\cite{face-com}.
We use the publicly available REST API of \textsf{face.com} to do face
-recognition on our data set: the training is done on half of the data
-and the testing is done on the remaining half. For comparison, the
-Gaussian mixture algorithm is run with the same training-testing
-partitioning of the data set. In this setting, the Sequential
-Hypothesis Testing algorithm is not relevant for the comparison,
-because \textsf{face.com} does not give the possibility to mark a
-sequence of frames as belonging to the same run. This additional
-information would be used by the SHT algorithm and would thus bias the
-results in favor of skeleton recognition.
+recognition on our data set. Due to the restrictions of the API, for this
+experiment we train on one half of the data and test on the remaining half. For
+comparison, the Gaussian mixture algorithm is run with the same
+training-testing partitioning of the data set. In this setting, the Sequential
+Hypothesis Testing algorithm is not relevant for the comparison, because
+\textsf{face.com} does not allow marking a sequence of frames as
+belonging to the same run. This additional information would be used by the SHT
+algorithm and would thus bias the results in favor of skeleton recognition.
\begin{figure}[t]
+\parbox[t]{0.49\linewidth}{
\begin{center}
- \includegraphics[width=0.80\textwidth]{graphics/face.pdf}
+ \includegraphics[width=0.49\textwidth]{graphics/face.pdf}
\end{center}
- \caption{Precision-Recall curve for face recognition and skeleton recognition}
+ \caption{Precision-recall curve for face recognition and skeleton recognition}
\label{fig:face}
-\end{figure}
-
-\paragraph{People walking away from the camera.}
-
-The performance of face recognition and skeleton recognition are
-comparable in the previous setting \todo{is that really
-true?}. However, there are many cases where only skeleton recognition
-is possible. The most obvious one is when people are walking away from
-the camera. Coming back to the raw data collected during the
-experiment design, we manually label the runs of people walking away
-from the camera. In this case, it is harder to get the ground truth
-classification and some of runs are dropped because it is not possible
-to recognize the person. Apart from that, the data set reduction is
-performed exactly as explained in Section~\ref{sec:experiment-design}.
-
-\begin{figure}[t]
+}
+\parbox[t]{0.49\linewidth}{
\begin{center}
- \includegraphics[width=0.80\textwidth]{graphics/back.pdf}
+ \includegraphics[width=0.49\textwidth]{graphics/back.pdf}
\end{center}
- \caption{Precision-Recall curve for the sequential hypothesis
- testing algorithm in the online setting with people walking away
- from and toward the camera. All the people are included}
+ \caption{Precision-recall curve
+ with people walking away
+ from and toward the camera}
\label{fig:back}
+}
\end{figure}
-Figure~\ref{fig:back} compares the curve obtained in the online
+\subsection{Walking away}
+
+The performances of face recognition and skeleton recognition are comparable in
+the previous setting. However, there are many cases where only skeleton
+recognition is possible. The most obvious one is when people are walking away
+from the camera. Coming back to the raw data collected during the experiment
+design, we manually label the runs of people walking away from the camera. In
+this case, it is harder to get the ground-truth classification and some runs
+are dropped because it is not possible to recognize the person. Apart from
+that, the data set reduction is performed exactly as explained in
+Section~\ref{sec:experiment-design}.
+
+%\begin{figure}[t]
+% \begin{center}
+% \includegraphics[width=0.80\textwidth]{graphics/back.pdf}
+% \end{center}
+% \caption{Precision-Recall curve for the sequential hypothesis
+% testing algorithm in the online setting with people walking away
+% from and toward the camera. All the people are included}
+% \label{fig:back}
+%\end{figure}
+
+\fref{fig:back} compares the curve obtained in the online
setting with people walking toward the camera, with the curve obtained
by running the same experiment on the data set of runs of people
walking away from the camera. The two curves are essentially the
same. However, one could argue that as the two data sets are
completely disjoint, the SHT algorithm is not learning the same
profile for a person walking toward the camera and for a person
-walking away from the camera. Figure~\ref{fig:back2} shows the
-Precision-Recall curve when training on runs toward the camera and
+walking away from the camera. \fref{fig:back} also shows the
+precision-recall curve when training on runs toward the camera and
testing on runs away from the camera.
-\todo{PLOT NEEDED}
+\subsection{Reducing the noise}
-\paragraph{Reducing the noise.} Predicting potential improvements of
-the prediction rate of our algorithm is straightforward. The algorithm
-relies on 9 features only. Section~\ref{sec:uniqueness} shows that
-6 of these features alone are sufficient to perfectly distinguish two
-different skeletons at a low noise level. Therefore, the only source
-of classification error in our algorithm is the dispersion of the
-observed limbs' lengths away from the exact measurements.
+Predicting the potential improvement of our algorithm's prediction rate is
+straightforward. The algorithm relies on nine features only.
+\xref{sec:uniqueness} shows that six of these features alone are
+sufficient to perfectly distinguish two different skeletons at a low noise
+level. Therefore, the only source of classification error in our algorithm is
+the dispersion of the observed limbs' lengths away from the exact measurements.
To simulate a possible reduction of the noise level, the data set is
modified as follows: all the observations for a given person are
homothetically contracted towards their average so as to divide their
-empirical variance by 2. Formally, if $x$ is an observation in the
-9-dimensional feature space for the person $i$, and if $\bar{x}$ is
+empirical variance by 2. Formally, if $\bx$ is an observation in the
+9-dimensional feature space for the person $i$, and if $\bar{\bx}$ is
the average of all the observations available for this person in the
-data set, then $x$ is replaced by $x'$ defined by:
+data set, then $\bx$ is replaced by $\bx'$ defined by:
\begin{equation}
- x' = \bar{x} + \frac{x-\bar{x}}{\sqrt{2}}
+ \bx' = \bar{\bx} + \frac{\bx-\bar{\bx}}{\sqrt{2}}
\end{equation}
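This contraction is straightforward to apply per person; a short numpy sketch, assuming `obs` is the $n \times 9$ matrix of one person's observations:

```python
import numpy as np

def halve_variance(obs):
    """Contract each observation toward the per-person mean so the
    empirical variance of every feature is divided by 2:
    x' = mean + (x - mean) / sqrt(2)."""
    mean = obs.mean(axis=0)
    return mean + (obs - mean) / np.sqrt(2)
```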
We believe that a reduction factor of 2 for the noise variance is
realistic given the relatively low resolution of the Kinect's infrared
camera.
-Figure~\ref{fig:var} compares the Precision-Recall curve of
-Figure~\ref{fig:sequential} to the curve of the same experiment run on
+\fref{fig:var} compares the precision-recall curve of
+\fref{fig:online} to the curve of the same experiment run on
the newly obtained data set.
\begin{figure}[t]
\begin{center}
- \includegraphics[width=0.80\textwidth]{graphics/var.pdf}
+ \includegraphics[width=0.49\textwidth]{graphics/var.pdf}
\end{center}
- \caption{Precision-Recall curve for the sequential hypothesis
+ \caption{Precision-recall curve for the sequential hypothesis
testing algorithm in the online setting for all the people with and
without halving the variance of the noise}
\label{fig:var}