\section{Real-World Evaluation}
\label{sec:experiment}

We conduct a real-life, uncontrolled experiment using the Kinect to
test the algorithm. First, we describe our approach to data
collection. Second, we describe how the data is processed and
classified. Finally, we discuss the results.

\subsection{Dataset}

The Kinect outputs three primary signals in real time: a color image
stream, a depth image stream, and microphone output
(\fref{fig:hallway}). For our purposes, we focus on the depth image
stream. As the Kinect was designed to interface directly with the
Xbox 360, the tools to interact with it on a PC are limited. The
OpenKinect project released libfreenect~\cite{libfreenect}, a
reverse-engineered driver that gives access to the raw depth images of
the Kinect. This raw data could be used to implement skeleton-fitting
algorithms, \eg those of Plagemann~\etal{}~\cite{plagemann:icra10}.
Alternatively, OpenNI~\cite{openni}, an open framework led by
PrimeSense, the company behind the technology of the Kinect, offers
figure detection and skeleton fitting on top of raw access to the data
streams. More recently, the Kinect for Windows SDK~\cite{kinect-sdk}
was released; its skeleton-fitting algorithm operates in real time
without calibration. Prior to the release of the Kinect SDK, we
experimented with OpenNI for skeleton recognition, with positive
results. Unfortunately, the skeleton-fitting algorithm of OpenNI
requires each individual to strike a specific pose for calibration,
which makes collecting a large amount of data difficult. Upon the
release of the Kinect SDK, we selected it for our data collection, as
it is the state of the art and does not require calibration.

We collect data using the Kinect SDK over a period of one week in a
research laboratory setting. The Kinect is placed at the tee of a
well-traversed hallway. The view of the Kinect is shown in
\fref{fig:hallway}: the color image, the depth image, and the fitted
skeleton of a person in a single frame. For each frame where a person
is detected and a skeleton is fitted, we capture the 3D coordinates of
20 body joints and the color image.

\begin{figure}[t]
\begin{center}
\includegraphics[width=0.99\textwidth]{graphics/hallway.png}
\end{center}
\vspace{-\baselineskip}
\caption{Experiment setting. Color image, depth image, and fitted
skeleton as captured by the Kinect in a single frame}
\label{fig:hallway}
\end{figure}

For some frames, one or several joints are out of the frame or
occluded by another part of the body. In those cases, the coordinates
of these joints are either absent from the frame or present but tagged
as \emph{Inferred} by the Kinect SDK: even though the joint is not
visible in the frame, the skeleton-fitting algorithm attempts to guess
its location. Ground-truth person identification is obtained by
manually labelling each run based on the images captured by the RGB
camera of the Kinect. For ease of labelling, only the runs with people
walking toward the camera are kept; these are the runs where the
average distance from the skeleton joints to the camera is decreasing.

\subsection{Experiment design}
\label{sec:experiment-design}

We preprocess the data set to extract \emph{features} from the raw
data. First, the lengths of 15 body parts are computed from the joint
coordinates; these are the distances between two contiguous joints in
the human body. If either joint of a body part is absent or merely
inferred in a frame, the corresponding body part is reported as absent
for that frame.
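For concreteness, the following Python sketch illustrates this first
preprocessing step. It is illustrative only: the joint names, the
frame representation (a dictionary from joint name to a 3D position
and a tracking state), and the subset of body parts shown are our
assumptions, not the Kinect SDK's API.

\begin{verbatim}
# Sketch of step one: body-part lengths from joint coordinates.
# The frame layout and joint names are assumptions for illustration.
import math

# Pairs of contiguous joints defining body parts (subset of the 15).
BODY_PARTS = [
    ("ShoulderCenter", "Head"),
    ("ShoulderLeft", "ElbowLeft"),
    ("ElbowLeft", "WristLeft"),
    ("HipLeft", "KneeLeft"),
    ("KneeLeft", "AnkleLeft"),
]

def body_part_lengths(frame):
    """frame: joint name -> (x, y, z, state), state being
    'Tracked' or 'Inferred'. Returns pair -> length, with None
    for parts whose joints are absent or merely inferred."""
    lengths = {}
    for a, b in BODY_PARTS:
        ja, jb = frame.get(a), frame.get(b)
        if (ja is None or jb is None
                or ja[3] != "Tracked" or jb[3] != "Tracked"):
            lengths[(a, b)] = None  # body part absent this frame
        else:
            lengths[(a, b)] = math.dist(ja[:3], jb[:3])
    return lengths
\end{verbatim}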
Second, we reduce the number of features to nine by exploiting the
vertical symmetry of the human body: if two body parts are symmetric
about the vertical axis, we bundle them into one feature by averaging
their lengths. If only one of them is present, we take its value; if
neither is present, the feature is reported as missing for the frame.
Finally, any frame with a missing feature is filtered out. The
resulting nine features include the six arm, leg, and pelvis
measurements from \xref{sec:uniqueness}, plus three additional
measurements: spine length, shoulder breadth, and head size. We list
the nine features as pairs of joints:

\vspace{-1.5\baselineskip}
\begin{table}
\begin{center}
\begin{tabular}{ll}
Head--ShoulderCenter & Spine--HipCenter\\
ShoulderCenter--Shoulder & HipCenter--Hip\\
Shoulder--Elbow & Hip--Knee\\
Elbow--Wrist & Knee--Ankle\\
ShoulderCenter--Spine &\\
\end{tabular}
\end{center}
\end{table}
\vspace{-2.5\baselineskip}

Each detected skeleton also carries an ID number identifying the
figure it maps to in the figure-detection stage. Consecutive frames
with the same ID mean that the skeleton-fitting algorithm tracked the
skeleton continuously. This allows us to define the concept of a
\emph{run}: a sequence of frames with the same skeleton ID.

We perform five experiments. First, we test the performance of
skeleton recognition using traditional 10-fold cross validation, to
represent an offline setting. Second, we run our algorithms in an
online setting by training and testing over time. Third, we pit
skeleton recognition against the state of the art in face recognition.
Next, we test how our solution performs when people are walking away
from the camera. Finally, we study what happens if the noise from the
Kinect is reduced.

%\begin{table}
%\begin{center}
%\caption{Data set statistics. The right part of the table shows the
%average numbers for different intervals of $k$, the rank of a person
%in the ordering given by the number of frames}
%\label{tab:dataset}
%\begin{tabular}{|l|r||r|r|r|}
%\hline
%Number of people & 25 & $k\leq 5$ & $5\leq k\leq 20$ & $k\geq 20$\\
%\hline
%Number of frames & 15945 & 1211 & 561 & 291 \\
%\hline
%Number of runs & 244 & 18 & 8 & 4\\
%\hline
%\end{tabular}
%\end{center}
%\end{table}

\begin{figure}[t]
\begin{center}
\includegraphics[]{graphics/frames.pdf}
\end{center}
\vspace{-1.5\baselineskip}
\caption{Distribution of the frame ratio of each individual in the
data set}
\label{fig:frames}
\end{figure}

\subsection{Offline learning setting}
\label{sec:experiment:offline}

In the first experiment, we study the accuracy of skeleton recognition
using 10-fold cross validation. The data set is partitioned into 10
contiguous time sequences of equal size. For a given recall threshold,
the algorithm is trained on nine of the sequences and tested on the
remaining one. This is repeated for the 10 possible test subsamples,
and averaging over these 10 training-testing experiments yields the
prediction rate for the chosen threshold.
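As a minimal sketch of this protocol (not our actual implementation),
the following Python code cuts the chronologically ordered frames into
10 contiguous blocks and averages the prediction rate over the 10
splits. Here \texttt{train} and \texttt{predict} are hypothetical
stand-ins for the MoG or SHT models, with \texttt{predict} abstaining
when its confidence falls below the threshold.

\begin{verbatim}
# Chronological 10-fold protocol (sketch). `frames` and `labels` are
# numpy arrays ordered by time; `train`/`predict` are model stand-ins.
import numpy as np

def chronological_cv(frames, labels, train, predict, threshold, k=10):
    blocks = np.array_split(np.arange(len(frames)), k)  # contiguous
    rates = []
    for test_idx in blocks:
        train_idx = np.setdiff1d(np.arange(len(frames)), test_idx)
        model = train(frames[train_idx], labels[train_idx])
        correct = total = 0
        for i in test_idx:
            guess = predict(model, frames[i], threshold)  # None = abstain
            if guess is not None:
                total += 1
                correct += int(guess == labels[i])
        rates.append(correct / total if total else 0.0)
    return np.mean(rates)  # prediction rate at this threshold
\end{verbatim}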
We test the mixture of Gaussians (MoG) and sequential hypothesis
testing (SHT) models, and find that SHT generally performs better than
MoG and that accuracy increases as the group size decreases.
\fref{fig:offline} shows the precision-recall plot as the threshold
varies. At a recall of 100\% on all people, both algorithms achieve
more than three times the majority-class baseline of 15\%. Several
curves are obtained for different group sizes: people are ordered by
their frequency of appearance (\fref{fig:frames}), and all the frames
belonging to people beyond a given rank in this ordering are removed.
The decrease in performance as the number of people grows can be
explained by the overlaps between skeleton profiles due to noise, as
discussed in \xref{sec:uniqueness}, but also by the small number of
runs available for the least present people, as seen in
\fref{fig:frames}, which does not permit proper training of the
algorithm.

\begin{figure*}[t]
\begin{center}
\subfloat[Mixture of Gaussians]{
\includegraphics[]{graphics/offline-nb.pdf}
\label{fig:offline:nb}
}
\subfloat[Sequential hypothesis testing]{
\includegraphics[]{graphics/offline-sht.pdf}
\label{fig:offline:sht}
}
\caption{Results with 10-fold cross validation for the top $n_p$ most
present people}
\label{fig:offline}
\end{center}
\vspace{-1.5\baselineskip}
\end{figure*}

\subsection{Online learning setting}

In the second experiment, we evaluate skeleton recognition in an
online setting. Although the previous evaluation is standard, it does
not properly reflect reality. A real-life setting could be as follows:
the camera is placed at the entrance of a building; when a person
enters the building, their identity is established by the electronic
key system and a new labeled run is added to the data set; the
identification algorithm is then retrained on the augmented data set,
and the newly obtained classifier is deployed in the building. In this
setting, the sequential hypothesis testing (SHT) algorithm is the more
suitable model, because it accounts for the fact that a person's
identity does not change within a run. The analysis is therefore
performed by partitioning the data set into 10 chronological
subsamples of equal size. For a given threshold, the algorithm is
trained and tested incrementally: trained on the first $k$ subsamples
(in chronological order) and tested on the $(k+1)$-th subsample.
\fref{fig:online} shows the precision-recall curve obtained by
averaging the prediction rate over the incremental experiments.
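The incremental protocol can be sketched in Python as follows, again
with hypothetical \texttt{train} and \texttt{predict} stand-ins; since
SHT issues one decision per run, the sketch iterates over runs rather
than frames.

\begin{verbatim}
# Online protocol (sketch): train on the first k chronological
# subsamples, test on subsample k+1, and average the rates.
def online_evaluation(subsamples, train, predict, threshold):
    """subsamples: chronologically ordered list of (runs, labels)."""
    rates = []
    for k in range(1, len(subsamples)):
        seen_runs = [r for runs, _ in subsamples[:k] for r in runs]
        seen_labels = [l for _, labels in subsamples[:k] for l in labels]
        model = train(seen_runs, seen_labels)
        runs, labels = subsamples[k]
        correct = total = 0
        for run, label in zip(runs, labels):
            guess = predict(model, run, threshold)  # None = abstain
            if guess is not None:
                total += 1
                correct += int(guess == label)
        rates.append(correct / total if total else 0.0)
    return sum(rates) / len(rates)
\end{verbatim}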
\begin{figure}[t]
\parbox[t]{0.49\linewidth}{
\begin{center}
\includegraphics[width=0.49\textwidth]{graphics/online-sht.pdf}
\end{center}
\vspace{-1.5\baselineskip}
\caption{Results for the online setting, where $n_p$ is the size of
the group as in Figure~\ref{fig:offline}}
\label{fig:online}
}
\parbox[t]{0.49\linewidth}{
\begin{center}
\includegraphics[width=0.49\textwidth]{graphics/face.pdf}
\end{center}
\vspace{-1.5\baselineskip}
\caption{Results for face recognition versus skeleton recognition}
\label{fig:face}
}
\end{figure}

\subsection{Face recognition}

In the third experiment, we compare the performance of skeleton
recognition with that of face recognition as provided by
\textsf{face.com}. At the time of writing, this is the best-performing
face recognition algorithm on the LFW data
set\footnote{\url{http://vis-www.cs.umass.edu/lfw/results.html}}. The
results show that face recognition is more accurate than skeleton
recognition, but not by a large margin.

We use the publicly available REST API of \textsf{face.com} to run
face recognition on our data set. Due to the restrictions of the API,
for this experiment we train on one half of the data and test on the
other half. For comparison, the MoG algorithm is run with the same
training-testing partition of the data set. In this setting, SHT is
not relevant for the comparison, because \textsf{face.com} offers no
way to mark a sequence of frames as belonging to the same run; this
additional information would be exploited by the SHT algorithm and
would bias the results in favor of skeleton recognition. However, this
result does not take into account the disparity in the number of runs
for which face recognition and skeleton recognition can classify
frames at all, which we discuss in the next experiment.

\begin{figure}[t]
\parbox[t]{0.49\linewidth}{
\begin{center}
\includegraphics[width=0.49\textwidth]{graphics/back.pdf}
\end{center}
\vspace{-1.5\baselineskip}
\caption{Results with people walking away from and toward the camera}
\label{fig:back}
}
\parbox[t]{0.49\linewidth}{
\begin{center}
\includegraphics[width=0.49\textwidth]{graphics/var.pdf}
\end{center}
\vspace{-1.5\baselineskip}
\caption{Results with and without halving the variance of the noise}
\label{fig:var}
}
\end{figure}

\subsection{Walking away}

In the next experiment, we include the runs of people walking away
from the Kinect that we could positively identify. Face recognition
outperforms skeleton recognition in the previous setting, but there
are many cases where only skeleton recognition is possible; the most
obvious one is when people are walking away from the camera. Returning
to the raw data collected during the experiment, we manually label the
runs of people walking away from the camera. In this case it is harder
to establish the ground truth, and some runs are dropped because the
person cannot be recognized. Apart from that, the data set reduction
is performed exactly as explained in \xref{sec:experiment-design}. Our
results show that we can identify people walking away from the camera
just as well as people walking toward it.

\fref{fig:back} compares the curve obtained in
\xref{sec:experiment:offline}, with people walking toward the camera,
to the curve obtained by running the same experiment on the data set
of runs of people walking away from the camera. The two curves are
essentially the same. However, one could argue that, as the two data
sets are completely disjoint, the SHT algorithm is not learning the
same profile for a person walking toward the camera and for the same
person walking away from it. The third curve of \fref{fig:back}
addresses this concern: it shows the precision-recall curve when
training and testing on the combined data set of runs toward and away
from the camera.

\subsection{Reducing the noise}

For the final experiment, we study what happens when the noise of the
Kinect is reduced. To simulate a reduction of the noise level, the
data set is modified as follows: we compute the average profile of
each person, and for each frame we shrink the deviation from this
average by a factor of $\sqrt{2}$, which halves the empirical
variance. Formally, if $\bx$ is an observation in the 9-dimensional
feature space for person $i$, and if $\bar{\bx}$ is the average of all
the observations available for this person in the data set, then $\bx$
is replaced by $\bx'$ defined by:
\begin{equation}
\bx' = \bar{\bx} + \frac{\bx-\bar{\bx}}{\sqrt{2}}
\end{equation}
We believe that a reduction factor of 2 for the variance of the noise
is realistic given the relatively low resolution of the Kinect's
infrared camera.
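This transformation maps directly to code; a minimal numpy sketch
(the array names are ours) is:

\begin{verbatim}
# Halve the variance of the noise about each person's average profile.
import numpy as np

def halve_noise_variance(X, person_ids):
    """X: (n_frames, 9) feature matrix; person_ids: length-n array."""
    X_new = X.astype(float)
    for p in np.unique(person_ids):
        mask = person_ids == p
        mean = X[mask].mean(axis=0)  # average profile of person p
        X_new[mask] = mean + (X[mask] - mean) / np.sqrt(2)
    return X_new
\end{verbatim}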
\fref{fig:var} compares the precision-recall curve of
\fref{fig:offline:sht} to the curve of the same experiment run on the
modified data set.

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kinect"
%%% End: