-rw-r--r--   abstract.tex          18
-rw-r--r--   algorithm.tex         77
-rwxr-xr-x   data/face-test.py      3
-rwxr-xr-x   data/face-train.py     3
-rw-r--r--   experimental.tex     119
-rw-r--r--   intro.tex             37
-rw-r--r--   related.tex           24
-rw-r--r--   uniqueness.tex        14
8 files changed, 185 insertions, 110 deletions
diff --git a/abstract.tex b/abstract.tex
index 9386d11..4940b90 100644
--- a/abstract.tex
+++ b/abstract.tex
@@ -1,11 +1,11 @@
 \begin{abstract}
- This paper explores a novel approach for person recognition based on
- skeletal measurements. After showing that exact measurements allow
- for accurate recognition, we study two algorithmic approaches for
- identification in case of approximate measurements. A real-life
- experiment with 25 people and measurements obtained from the Kinect
- range camera gives us promising results and comparison with state of
- the art facial recognition validates the viability of
- skeleton-based identification.
-\end{abstract}
+ This paper explores a novel approach for person recognition based on skeletal
+ measurements. After showing that exact measurements allow for accurate
+ recognition in a large dataset, we study two algorithmic approaches for
+ recognition given approximate measurements. We perform a real-world
+ experiment with measurements captured from the Kinect and obtain 95\%
+ accuracy with three people and 85\% accuracy with five people. Our results
+ and a comparison with state-of-the-art facial recognition validate the
+ viability of skeleton-based recognition.
+ \end{abstract}
 
diff --git a/algorithm.tex b/algorithm.tex
index b3e7648..9bc0fd6 100644
--- a/algorithm.tex
+++ b/algorithm.tex
@@ -1,29 +1,69 @@
 \section{Algorithms}
 \label{sec:algorithms}
 
-In Section~\ref{sec:uniqueness}, we showed that a nearest-neighbor classifier can accurately predict if a skeleton belongs to the same person if the error of the skeleton measurements is small. In this section, we suggest a probabilistic model for skeleton recognition. In this model, a skeleton is classified based on the distance from average skeleton profiles of people in the training set.
+In Section~\ref{sec:uniqueness}, we showed that a nearest-neighbor classifier
+can accurately predict if two sets of skeletal measurements belong to the same
+person if the error of the skeleton measurements is small. In this section, we
+suggest a probabilistic model for skeleton recognition. In this model, a
+skeleton is classified based on the distance from average skeleton profiles of
+people in the training set.
 
 \subsection{Mixture of Gaussians}
 \label{sec:mixture of Gaussians}
 
-A mixture of Gaussians \cite{bishop06pattern} is a generative probabilistic model, which is typically applied to modeling problems where class densities are unimodal and the feature space is low-dimensional. The joint probability distribution of the model is given by:
+A mixture of Gaussians \cite{bishop06pattern} is a generative probabilistic
+model, which is typically applied to modeling problems where class densities
+are unimodal and the feature space is low-dimensional. The joint probability
+distribution of the model is given by:
 \begin{align}
   P(\bx, y) = \cN(\bx | \bar{\bx}_y, \Sigma) P(y),
   \label{eq:mixture of Gaussians}
 \end{align}
-where $P(y)$ is the probability of class $y$ and $\cN(\bx | \bar{\bx}_y, \Sigma)$ is a multivariate normal distribution, which models the density of $\bx$ given $y$. The mean of the distribution is $\bar{\bx}_y$ and the variance of $\bx$ is captured by the covariance matrix $\Sigma$. The decision boundary between any two classes is known to be is linear when all conditionals $\cN(\bx | \bar{\bx}_y, \Sigma)$ have the same covariance matrix \cite{bishop06pattern}. In this setting, the mixture of Gaussians model can be viewed as a probabilistic variant of the nearest-neighbor (NN) classifier in Section~\ref{sec:uniqueness}.
+where $P(y)$ is the probability of class $y$ and $\cN(\bx | \bar{\bx}_y,
+\Sigma)$ is a multivariate normal distribution, which models the density of
+$\bx$ given $y$. The mean of the distribution is $\bar{\bx}_y$ and the variance
+of $\bx$ is captured by the covariance matrix $\Sigma$. The decision boundary
+between any two classes is known to be linear when all conditionals $\cN(\bx
+| \bar{\bx}_y, \Sigma)$ have the same covariance matrix \cite{bishop06pattern}.
+In this setting, the mixture of Gaussians model can be viewed as a
+probabilistic variant of the nearest-neighbor (NN) classifier in
+Section~\ref{sec:uniqueness}.
+
+The mixture of Gaussians model has many advantages. First, the model can be
+easily learned using maximum-likelihood (ML) estimation \cite{bishop06pattern}.
+In particular, $P(y)$ is the frequency of $y$ in the training set,
+$\bar{\bx}_y$ is the expectation of $\bx$ given $y$, and the covariance matrix
+is computed as $\Sigma = \sum_y P(y) \Sigma_y$, where $\Sigma_y$ represents the
+covariance of $\bx$ given $y$. Second, the inference in the model can be
+performed in a closed form. In particular, the model predicts $\hat{y} =
+\arg\max_y P(y | \bx)$, where:
-The mixture of Gaussians model has many advantages. First, the model can be easily learned using maximum-likelihood (ML) estimation \cite{bishop06pattern}. In particular, $P(y)$ is the frequency of $y$ in the training set, $\bar{\bx}_y$ is the expectation of $\bx$ given $y$, and the covariance matrix is computed as $\Sigma = \sum_y P(y) \Sigma_y$, where $\Sigma_y$ represents the covariance of $\bx$ given $y$. Second, the inference in the model can be performed in a closed form. In particular, the model predicts $\hat{y} = \arg\max_y P(y | \bx)$, where:
 \begin{align}
   P(y | \bx)
   = \frac{P(\bx | y) P(y)}{\sum_y P(\bx | y) P(y)}
   = \frac{\cN(\bx | \bar{\bx}_y, \Sigma) P(y)}{\sum_y \cN(\bx | \bar{\bx}_y, \Sigma) P(y)}.
   \label{eq:inference}
 \end{align}
-In practice, the prediction $\hat{y}$ is accepted when the classifier is confident. In other words, $P(\hat{y} | \bx) \! > \! \delta$, where $\delta \in (0, 1)$ is a threshold that controls the precision and recall of the classifier. In general, the higher the threshold $\delta$, the lower the recall and the higher the precision.
-In this work, we use the mixture of Gaussians model for skeleton recognition. Skeleton measurements are represented by a vector $\bx$ and each person is assigned to one class $y$. In particlar, our dataset $\cD = \set{(\bx_1, y_1), \dots, (\bx_n, y_n)}$ consists of $n$ pairs $(\bx_i, y_i)$, where $y_i$ is the label of the skeleton $\bx_i$. To verify that our method is suitable for skeleton recognition, we plot for each skeleton feature $x_k$ (Section~\ref{sec:experiment}) the histogram of differences between all measurements and the expectation given the class $(\bx_i)_k - \E{}{x_k | y_i}$ (Figure~\ref{fig:error marginals}). All histograms look approximately normal. This indicates that all class conditionals $P(\bx | y)$ are multivariate normal and our generative model, although very simple, may be nearly optimal \cite{bishop06pattern}.
+In practice, the prediction $\hat{y}$ is accepted when the classifier is
+confident. In other words, $P(\hat{y} | \bx) \! > \! \delta$, where $\delta \in
+(0, 1)$ is a threshold that controls the precision and recall of the
+classifier. In general, the higher the threshold $\delta$, the lower the recall
+and the higher the precision.
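The shared-covariance model above boils down to a few lines of NumPy: class priors are label frequencies, class means are per-person averages, a single pooled covariance is used for every class, and a posterior threshold gates the prediction. The sketch below is illustrative only; the class name, method names, and the default value of delta are assumptions, not code from this repository.

import numpy as np

class MixtureOfGaussians(object):
    def fit(self, X, y):
        """X: (n, d) array of skeleton feature vectors; y: person labels."""
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes = np.unique(y)
        self.priors = np.array([np.mean(y == c) for c in self.classes])
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        # Pooled covariance: Sigma = sum_y P(y) Sigma_y.
        self.cov = sum(p * np.cov(X[y == c], rowvar=False, bias=True)
                       for p, c in zip(self.priors, self.classes))
        self.cov_inv = np.linalg.inv(self.cov)
        return self

    def log_likelihoods(self, x):
        # log N(x | mean_y, Sigma) up to an additive constant that is shared
        # by all classes (the shared covariance makes it cancel in Bayes' rule).
        d = x - self.means
        return -0.5 * np.sum(d.dot(self.cov_inv) * d, axis=1)

    def posterior(self, x):
        # P(y | x); subtracting the max keeps the exponentials stable.
        s = self.log_likelihoods(x) + np.log(self.priors)
        s -= s.max()
        p = np.exp(s)
        return p / p.sum()

    def predict(self, x, delta=0.9):
        # Accept the prediction only when the classifier is confident enough.
        p = self.posterior(x)
        best = np.argmax(p)
        return self.classes[best] if p[best] > delta else None

Sweeping delta over (0, 1) with this kind of predictor is what traces out a precision-recall curve like the ones reported in the experiments.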
+
+In this work, we use the mixture of Gaussians model for skeleton recognition.
+Skeleton measurements are represented by a vector $\bx$ and each person is
+assigned to one class $y$. In particular, our dataset $\cD = \set{(\bx_1, y_1),
+\dots, (\bx_n, y_n)}$ consists of $n$ pairs $(\bx_i, y_i)$, where $y_i$ is the
+label of the skeleton $\bx_i$. To verify that our method is suitable for
+skeleton recognition, we plot for each skeleton feature $x_k$
+(Section~\ref{sec:experiment}) the histogram of differences between all
+measurements and the expectation given the class $(\bx_i)_k - \E{}{x_k | y_i}$
+(Figure~\ref{fig:error marginals}). All histograms look approximately normal.
+This suggests that all class conditionals $P(\bx | y)$ are multivariate normal
+and our generative model, although very simple, may be nearly optimal
+\cite{bishop06pattern}.
 
 \begin{figure}[t]
   \centering
@@ -39,15 +79,32 @@ In this work, we use the mixture of Gaussians model for skeleton recognition. Sk
 
 \subsection{Sequential hypothesis testing}
 \label{sec:SHT}
 
-The mixture of Gaussians model can be extended to temporal inference through sequential hypothesis testing. Sequential hypothesis testing \cite{wald47sequential} is an established statistical framework, where a subject is sequentially tested for belonging to one of several classes. The probability that the sequence of data $\bx^{(1)}, \dots, \bx^{(t)}$ belongs to the class $y$ at time $t$ is given by:
+The mixture of Gaussians model can be extended to temporal inference through
+sequential hypothesis testing. Sequential hypothesis testing
+\cite{wald47sequential} is an established statistical framework, where a
+subject is sequentially tested for belonging to one of several classes. The
+probability that the sequence of data $\bx^{(1)}, \dots, \bx^{(t)}$ belongs to
+the class $y$ at time $t$ is given by:
+
 \begin{align}
   P(y | \bx^{(1)}, \dots, \bx^{(t)})
   = \frac{\prod_{i = 1}^t \cN(\bx^{(i)} | \bar{\bx}_y, \Sigma) P(y)}
   {\sum_y \prod_{i = 1}^t \cN(\bx^{(i)} | \bar{\bx}_y, \Sigma) P(y)}.
   \label{eq:SHT}
 \end{align}
-In practice, the prediction $\hat{y} = \arg\max_y P(y | \bx^{(1)}, \dots, \bx^{(t)})$ is accepted when the classifier is confident. In other words, $P(\hat{y} | \bx^{(1)}, \dots, \bx^{(t)}) > \delta$, where the threshold $\delta \in (0, 1)$ controls the precision and recall of the predictor. In general, the higher the threshold $\delta$, the higher the precision and the lower the recall.
+In practice, the prediction $\hat{y} = \arg\max_y P(y | \bx^{(1)}, \dots,
+\bx^{(t)})$ is accepted when the classifier is confident. In other words,
+$P(\hat{y} | \bx^{(1)}, \dots, \bx^{(t)}) > \delta$, where the threshold
+$\delta \in (0, 1)$ controls the precision and recall of the predictor. In
+general, the higher the threshold $\delta$, the higher the precision and the
+lower the recall.
 
-Sequential hypothesis testing is a common technique for smoothing temporal predictions. In particular, note that the prediction at time $t$ depends on all data up to time $t$. This reduces the variance of predictions, especially when input data are noisy, such as in the domain of skeleton recognition.
+Sequential hypothesis testing is a common technique for smoothing temporal
+predictions. In particular, note that the prediction at time $t$ depends on all
+data up to time $t$. This reduces the variance of predictions, especially when
+input data are noisy, such as in the domain of skeleton recognition.
 
-In skeleton recognition, the sequence $\bx^{(1)}, \dots, \bx^{(t)}$ are skeleton measurements of a person walking towards the camera, for instance. If the camera detects more people, we use tracking to distinguish individual skeleton sequences.
+In skeleton recognition, the sequence $\bx^{(1)}, \dots, \bx^{(t)}$ is skeleton
+measurements of a person walking towards the camera, for instance. If the
+Kinect detects multiple people, we use the figure tracking of the tools in
+\xref{sec:experiment:dataset} to distinguish individual skeleton sequences.
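Sequential hypothesis testing then only changes how the evidence is accumulated: per-frame log-likelihoods are summed over a run before normalizing, so the decision at time t uses every frame up to t. A hedged sketch on top of the MixtureOfGaussians sketch above; accepting as soon as the threshold is reached is one possible reading of when a prediction is "accepted".

import numpy as np

def sht_predict(model, frames, delta=0.9):
    """model: a fitted MixtureOfGaussians (see the earlier sketch);
    frames: feature vectors of one tracked run, in temporal order."""
    score = np.log(model.priors)
    for x in frames:
        score = score + model.log_likelihoods(x)  # sum of log N(x_i | mean_y, Sigma)
        p = np.exp(score - score.max())
        p /= p.sum()                              # P(y | x_1, ..., x_t)
        best = np.argmax(p)
        if p[best] > delta:
            return model.classes[best]            # confident: accept the run
    return None                                   # threshold never reached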
diff --git a/data/face-test.py b/data/face-test.py
index fed233a..08a0b37 100755
--- a/data/face-test.py
+++ b/data/face-test.py
@@ -27,7 +27,8 @@ except:
 ns = sys.argv[1]
 dataset = sys.argv[2]
 pic_dir = sys.argv[3]
-exclude = ['Anmol', 'Nina', 'Scott']
+#exclude = ['Anmol', 'Nina', 'Scott']
+exclude = []
 
 for line in open(dataset):
     line = line.strip().split(',')
diff --git a/data/face-train.py b/data/face-train.py
index f27074d..4fcee4c 100755
--- a/data/face-train.py
+++ b/data/face-train.py
@@ -9,7 +9,8 @@ import time
 import pickle
 
 users = pickle.load(open(sys.argv[1]))
-exclude = ['Anmol', 'Nina', 'Scott']
+#exclude = ['Anmol', 'Nina', 'Scott']
+exclude = []
 ns = sys.argv[2]
 
 api_key = '34a84a7835bf24df2d84b4bded84e838'
diff --git a/experimental.tex b/experimental.tex
index f513b92..dee0626 100644
--- a/experimental.tex
+++ b/experimental.tex
@@ -1,40 +1,51 @@
 \section{Real-World Evaluation}
 \label{sec:experiment}
 
-We conduct a real-life uncontrolled experiment using the Kinect to test to the
-algorithm. First we describe our approach to
-data collection. Second we describe how the data is processed and classified.
-Finally, we discuss the results.
+We conduct a real-life uncontrolled experiment using the Kinect to test our
+algorithms. First we describe our approach to data collection. Second we
+describe how the data is processed and classified. Finally, we discuss the
+results.
 
 \subsection{Dataset}
+\label{sec:experiment:dataset}
 
 The Kinect outputs three primary signals in real-time: a color image stream, a
 depth image stream, and microphone output (\fref{fig:hallway}). For our
 purposes, we focus on the depth image stream. As the Kinect was designed to
 interface directly with the Xbox 360, the tools to interact with it on a PC are
-limited. The OpenKinect project released libfreenect~\cite{libfreenect}, a
-reverse engineered driver which gives access to the raw depth images of the
-Kinect. This raw data could be used to implement skeleton fitting algorithms,
-\eg those of Plagemann~\etal{}~\cite{plagemann:icra10}. Alternatively,
+limited. The OpenKinect project released
+\textsf{libfreenect}~\cite{libfreenect}, a reverse engineered driver which
+gives access to the raw depth images of the Kinect. This raw data could be
+used to implement skeleton fitting algorithms, \eg those of
+Plagemann~\etal{}~\cite{plagemann:icra10}. Alternatively,
 OpenNI~\cite{openni}, an open framework led by PrimeSense, the company behind
-the technology of the Kinect, offers figure detection and skeleton fitting
+the technology of the Kinect, offers figure tracking and skeleton fitting
 algorithms on top of raw access to the data streams. More recently, the Kinect
-for Windows SDK~\cite{kinect-sdk} was released, and its skeleton fitting
-algorithm operates in real-time without calibration.
+for Windows SDK~\cite{kinect-sdk} was released, also with figure tracking
+and skeleton fitting algorithms.
+%and its skeleton fitting
+%algorithm operates in real-time without calibration.
 
-Prior to the release of the Kinect SDK, we experimented with using OpenNI for
-skeleton recognition with positive results. Unfortunately, the skeleton
-fitting algorithm of OpenNI requires each individual to strike a specific pose
-for calibration, making it more difficult to collect a lot of data. Upon the
-release of the Kinect SDK, we selected it to perform our data collection, given
-that it is the state-of-the-art and does not require calibration.
+We evaluated both OpenNI and the Kinect SDK for skeleton recognition. The
+skeleton fitting algorithm of OpenNI requires each individual to strike a
+specific pose for calibration, making it more difficult to collect a lot of
+data. We select the Kinect SDK to perform our data collection since it
+operates in real-time without calibration.
+
+%Prior to the release of the Kinect SDK, we experimented with using OpenNI for
+%skeleton recognition with positive results. Unfortunately, the skeleton
+%fitting algorithm of OpenNI requires each individual to strike a specific pose
+%for calibration, making it more difficult to collect a lot of data. Upon the
+%release of the Kinect SDK, we selected it to perform our data collection, given
+%that it is the state-of-the-art and does not require calibration.
 
 We collect data using the Kinect SDK over a period of a week in a research
 laboratory setting. The Kinect is placed at the tee of a well traversed
 hallway. The view of the Kinect is seen in \fref{fig:hallway}, showing the
 color image, the depth image, and the fitted skeleton of a person in a single
-frame. For each frame where a person is detected and a skeleton is fitted we
-capture the 3D coordinates of 20 body joints, and the color image.
+frame. Skeletons are fit from $\sim$1--5 meters away from the Kinect. For each
+frame where a person is detected and a skeleton is fit we capture the 3-D
+coordinates of 20 body joints, and the color image.
 
 \begin{figure}[t]
   \begin{center}
@@ -53,16 +64,10 @@ Kinect SDK.
 Inferred means that even though the joint is not visible in the
 frame, the skeleton-fitting algorithm attempts to guess the right location.
 
-Ground truth person identification is obtained by manually labelling each run
-based on the images captured by the RGB camera of the Kinect. For ease of
-labelling, only the runs with people walking toward the camera are kept. These
-are the runs where the average distance from the skeleton joints to the camera
-is increasing.
-
 \subsection{Experiment design}
 \label{sec:experiment-design}
 
-We preprocess the data set to extract \emph{features}
+We preprocess the dataset to extract \emph{features}
 from the raw data. First, the lengths of 15 body parts are computed from the
 joint coordinates. These are distances between two contiguous joints in the
 human body. If one of the two joints of a body part is not present or inferred
@@ -94,11 +99,19 @@ ShoulderCenter-Spine &\\
 \end{table}
 \vspace{-2.5\baselineskip}
 
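Concretely, each body-part length is the Euclidean distance between two contiguous joints of the fitted skeleton. A rough sketch of this preprocessing step follows; the joint names, the exact list of 15 pairs (only ShoulderCenter-Spine is visible in the table above), and the choice to drop a frame whose joints are missing or merely inferred are illustrative assumptions rather than the paper's exact rules.

import math

# Contiguous joint pairs whose distances serve as features; the first pair is
# taken from the table above, the rest are illustrative guesses.
BODY_PARTS = [
    ('ShoulderCenter', 'Spine'),
    ('Spine', 'HipCenter'),
    ('ShoulderLeft', 'ElbowLeft'),
    ('ElbowLeft', 'WristLeft'),
    ('HipLeft', 'KneeLeft'),
    ('KneeLeft', 'AnkleLeft'),
    # ... remaining pairs up to the 15 body parts used in the paper
]

def frame_features(joints):
    """joints: dict of joint name -> (x, y, z, state), state being 'Tracked'
    or 'Inferred'. Returns one length per body part, or None to drop the
    frame when a required joint is absent or only inferred."""
    lengths = []
    for a, b in BODY_PARTS:
        if a not in joints or b not in joints:
            return None
        ax, ay, az, sa = joints[a]
        bx, by, bz, sb = joints[b]
        if sa != 'Tracked' or sb != 'Tracked':
            return None
        lengths.append(math.sqrt((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2))
    return lengths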
-Each detected skeleton also has an ID number which identifies the figure it
-maps to from the figure detection stage. When there are consecutive frames with
-the same ID, it means that the skeleton-fitting algorithm was able to detect
-the skeleton in a contiguous way. This allows us to
-define the concept of a
-\emph{run}: a sequence of frames with the same skeleton
-ID.
+Each detected skeleton also has an ID number obtained from the figure detection
+stage. When there are consecutive frames with the same ID, it means that figure
+detection was able to track the figure in a contiguous way. This allows us to
+define the concept of a \emph{run}: a sequence of frames with the same skeleton
+ID. Because of errors in the depth image when a figure enters or exits the
+range of the camera, we only keep the frames of a run that are 2-3 meters away
+from the Kinect.
+
+Ground truth person identification is obtained by manually labelling each run
+based on the images captured by the color camera of the Kinect. For ease of
+labelling, only the runs with people walking toward the camera are kept. These
+are the runs where the average distance from the skeleton joints to the camera
+is increasing.
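One possible reading of the run extraction described above, assuming frames arrive in capture order as (skeleton ID, joints) records with coordinates in meters and z pointing away from the camera:

from itertools import groupby

def average_depth(joints):
    # Mean distance of the joints from the camera along the z axis (meters).
    return sum(z for (_x, _y, z, _state) in joints.values()) / len(joints)

def extract_runs(frames, near=2.0, far=3.0):
    """frames: iterable of (skeleton_id, joints) records in capture order."""
    runs = []
    for skeleton_id, group in groupby(frames, key=lambda frame: frame[0]):
        # A run is a maximal block of consecutive frames with the same ID;
        # frames outside the near-far band are dropped to avoid edge errors.
        kept = [joints for _sid, joints in group
                if near <= average_depth(joints) <= far]
        if kept:
            runs.append((skeleton_id, kept))
    return runs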
 
 We perform five experiments. First, we test the performance of skeleton
 recognition using traditional 10-fold cross validation, to
@@ -133,23 +146,23 @@ happens if the noise from the Kinect is reduced.
   \end{center}
   \vspace{-1.5\baselineskip}
   \caption{Distribution of the frequency of each individual in the
-    data set}
+    dataset}
   \label{fig:frames}
 \end{figure}
 
 \subsection{Offline learning setting}
 \label{sec:experiment:offline}
 
-In the first experiment, we study the accuracy of skeleton recognition
-using 10-fold cross validation. The data set is partitioned into 10
-continuous time sequences of equal size. For a given recall threshold,
-the algorithm is trained on 9 sequences and tested on the last
-one. This is repeated for all 10 possible testing sequences. Averaging
-the prediction rate over these 10 training-testing experiments yields
-the prediction rate for the chosen threshold. We test the mixture of
-Gaussians (MoG) and sequential hypothesis testing (SHT) models, and
-find that SHT generally performs better than MoG, and that accuracy
-increases as group size decreases.
+In the first experiment, we study the accuracy of skeleton recognition using
+10-fold cross validation. The dataset is partitioned into 10 continuous time
+sequences of equal size. For a given recall threshold, the algorithm is trained
+on 9 sequences and tested on the last one. This is repeated for all 10 possible
+testing sequences. Averaging the prediction rate over these 10 training-testing
+experiments yields the prediction rate for the chosen threshold. We test the
+mixture of Gaussians (MoG) and sequential hypothesis testing (SHT) models, with
+varying group size $n_p = \{3,5,10,25\}$.
+%and find that SHT generally performs better than MoG, and that accuracy
+%increases as group size decreases.
 
 \fref{fig:offline} shows the precision-recall plot as the threshold varies.
@@ -158,7 +171,7 @@ Both algrithms perform three times better than the majority class baseline
 of different group sizes: people are ordered based on their frequency of
 appearance (\fref{fig:frames}), and all the frames belonging to people beyond a
 given rank in this ordering are removed. The decrease of performance when
-increasing the number of people in the data set can be explained by the
+increasing the number of people in the dataset can be explained by the
 overlaps between skeleton profiles due to the noise, as discussed in
 Section~\ref{sec:uniqueness}, but also by the very few number of runs available
 for the least present people, as seen in \fref{fig:frames}, which does not
@@ -197,8 +210,8 @@ setting.
 Even though the previous evaluation is standard, it does not properly reflect
 reality. A real-life setting could be as follows. The camera is placed at the
 entrance of a building. When a person enters the building, his identity is
 detected based on the electronic key system and a new labeled run is added
-to the data set. The identification algorithm is then retrained on the
-augmented data set, and the newly obtained classifier can be deployed in the
+to the dataset. The identification algorithm is then retrained on the
+augmented dataset, and the newly obtained classifier can be deployed in the
 building.
 
 In this setting, the sequential hypothesis testing (SHT) algorithm is more
@@ -248,15 +261,15 @@ experiments.
 In the third experiment, we compare the performance of skeleton recognition
 with the performance of face recognition as given by \textsf{face.com}. At the
 time of writing, this is the best performing face recognition algorithm on the
-LFW data set~\footnote{\url{http://vis-www.cs.umass.edu/lfw/results.html}}.
+LFW dataset~\footnote{\url{http://vis-www.cs.umass.edu/lfw/results.html}}.
 The results show that face recognition has better accuracy than skeleton
 recognition, but not by a large margin.
 
 We use the publicly available REST API of \textsf{face.com} to do face
-recognition on our data set. Due to the restrictions of the API, for this
+recognition on our dataset. Due to the restrictions of the API, for this
 experiment we train on one half of the data and test on the remaining half. For
 comparison, MoG algorithm is run with the same training-testing partitioning of
-the data set. In this setting, SHT is not relevant for the comparison, because
+the dataset. In this setting, SHT is not relevant for the comparison, because
 \textsf{face.com} does not give the possibility to mark a sequence of frames as
 belonging to the same run. This additional information would be used by the SHT
 algorithm and would thus bias the results in favor of skeleton recognition.
@@ -299,7 +312,7 @@ obvious one is when people are walking away from the camera.
 Coming back to the raw data collected during the experiment design, we manually
 label the runs of people walking away from the camera. In this case, it is
 harder to get the ground truth classification and some of runs are dropped because it is not
-possible to recognize the person. Apart from that, the data set reduction is
+possible to recognize the person. Apart from that, the dataset reduction is
 performed exactly as explained in Section~\ref{sec:experiment-design}. Our
 results show that we can identify people walking away from the camera just as
 well as when they are walking towards the camera.
@@ -316,9 +329,9 @@ well as when they are walking towards the camera.
 
 \fref{fig:back} compares the curve obtained in \xref{sec:experiment:offline}
 with people walking toward the camera, with the curve obtained by running the
-same experiment on the data set of runs of people walking away from the camera.
+same experiment on the dataset of runs of people walking away from the camera.
 The two curves are sensibly the same. However, one could argue that as the two
-data sets are completely disjoint, the SHT algorithm is not learning the same
+datasets are completely disjoint, the SHT algorithm is not learning the same
 profile for a person walking toward the camera and for a person walking away
 from the camera. The third curve of \fref{fig:back} shows the precision-recall
 curve when training and testing on the combined dataset of runs toward and away
@@ -334,7 +347,7 @@ the Kinect.
 %perfectly distinguish two different skeletons at a low noise level. Therefore,
 %the only source of classification error in our algorithm is the dispersion of
 %the observed limbs' lengths away from the exact measurements.
-To simulate a reduction of the noise level, the data set is modified as
+To simulate a reduction of the noise level, the dataset is modified as
 follows: we compute the average profile of each person, and for each frame we
 divide the empirical variance from the average by 2. Formally, using the same
 notations as in Section~\ref{sec:mixture of Gaussians}, each
@@ -348,7 +361,7 @@ camera.
 
 \fref{fig:var} compares the Precision-recall curve of
 \fref{fig:offline:sht} to the curve of the same experiment run on
-the newly obtained data set.
+the newly obtained dataset.
 
 %\begin{figure}[t]
 %  \begin{center}
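The formal definition of this rescaling is cut off at the hunk boundary above, so the following is only one reading of it: pulling each frame toward its person's average profile by a factor of 1/sqrt(2) divides the empirical variance around that profile by 2. The function and variable names are assumptions for illustration.

import numpy as np

def halve_within_person_variance(X, y):
    """X: (n_frames, n_features) array of per-frame features;
    y: per-frame person labels."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    X_new = X.copy()
    for person in np.unique(y):
        mask = (y == person)
        profile = X[mask].mean(axis=0)  # average profile of this person
        # Scaling each deviation by 1/sqrt(2) divides its variance by 2.
        X_new[mask] = profile + (X[mask] - profile) / np.sqrt(2)
    return X_new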
diff --git a/intro.tex b/intro.tex
--- a/intro.tex
+++ b/intro.tex
@@ -1,11 +1,12 @@
 \section{Introduction}
 \label{sec:intro}
 
-Person recognition has become a valuable asset, whether for means of
-authentication, personalization, or other applications. Previous work revolves
-around either physiological biometrics, such as face recognition, or behavioral
-biometrics such as gait recognition. In this paper, we propose using
-skeletal measurements as a new physiological biometric for recognition.
+Person recognition has become a valuable tool, whether for means of
+authentication, personalization, or other applications. Previous work in
+person recognition uses either physiological biometrics, such as facial
+features, or behavioral biometrics like gait analysis. In this paper, we
+propose skeletal measurements as a new physiological biometric for
+recognition.
 
 In recent years, advances in range cameras have given us access to increasingly
 accurate real-time depth imaging. Furthermore, the low-cost and widely
@@ -13,26 +14,28 @@ available Kinect~\cite{kinect} has brought range imaging to the masses. In
 parallel, the automatic detection of body parts from depth images has led to
 real-time skeleton fitting.
 
-In this paper we show that skeleton fitting is accurate and unique enough in
-individuals to be used for person recognition. We make the following
-contributions. First, we show that ground truth skeleton measurements can
-uniquely identify a person. Second, we evaluate our hypothesis using
-real-world data collected with the Kinect. Our results show that skeleton
-recognition performs quite well, particularly in situations where face
-recognition cannot be performed.
+%In this paper we show that skeleton fitting is accurate and unique enough in
+%individuals to be used for person recognition.
+We make the following contributions. First, we show that ground truth skeleton
+measurements can uniquely identify a person. Second, we propose two models for
+skeleton recognition. Finally, we evaluate our hypothesis using real-world
+data collected with the Kinect. Our results show that skeleton recognition can
+identify three people with 95\% accuracy, and five people with 85\% accuracy.
+Furthermore, skeleton recognition can be performed in more situations than face
+recognition, such as when a person is not facing the camera.
 
 %As the resolution and accuracy of range cameras improve, so will the accuracy
 %and precision of skeleton fitting algorithms.
 
-Much of the prior work in person recognition focuses on data gathered from
-other sensors, such as face recognition with color images and voice
-recognition with microphones. In the realm of depth imaging, most of the work
-surrounds behavioral recognition, continuing work in gait recognition.
+
+%Much of the prior work in person recognition focuses on data gathered from
+%other sensors, such as face recognition with color images and voice
+%recognition with microphones. In the realm of depth imaging, most of the work
+%surrounds behavioral recognition, continuing work in gait recognition.
 
 The paper is organized as follows. First we discuss prior methods of person
 recognition, in addition to the advances in the technologies pertaining to
 skeleton fitting (Section~\ref{sec:related}).
 Next we use a dataset of actual skeletal measurements to show that
-recognition by skeleton is feasible
+skeletons are a unique enough descriptor for person recognition
 (Section~\ref{sec:uniqueness}). We then discuss an error model and the
 resulting algorithm to do person recognition (Section~\ref{sec:algorithms}).
 Finally, we collect skeleton data with
diff --git a/related.tex b/related.tex
index d804e87..f6ac87f 100644
--- a/related.tex
+++ b/related.tex
@@ -50,23 +50,23 @@ age happen gradually). We discuss how uniqueness is met in detail in
 By using skeleton as a biometric for recognition, we can formulate skeleton
 recognition in a similar way as we can face recognition. The equivalent parts
 would be figure detection, skeleton fitting, and classification. Figure
-detection and skeleton fitting map to silhouette extraction and model fitting
-in gait detection, but as previously noted, they are severely limited.
-However, Zhao~\etal~\cite{zhao20063d} perform gait recognition in 3-D using
-multiple cameras. By moving to 3-D, many of the problems related to silhouette
-extraction and model fitting are removed. Additionally, by moving to 3-D, we
-can take advantage of the wealth of research relating to motion
+detection and skeleton fitting respectively map to silhouette extraction and
+model fitting in gait detection, but as previously noted, they are severely
+limited. However, Zhao~\etal~\cite{zhao20063d} perform gait recognition in 3-D
+using multiple cameras. By moving to 3-D, many of the problems related to
+silhouette extraction and model fitting are removed. Additionally we can take
+advantage of the wealth of research relating to 3-D motion
 capture~\cite{mocap-survey}. Specifically, range cameras offer real-time depth
 imaging, and the Kinect~\cite{kinect} in particular is a widely available range
 camera with a low price point. Figure detection and skeleton fitting have also
 been studied in motion capture, mapping to region of interest detectors and
 human body part identification or pose estimation respectively in this
-context~\cite{plagemann:icra10,ganapathi:cvpr10,shotton:cvpr11}.
-Furthermore, OpenNI~\cite{openni} and the Kinect for Windows
-SDK~\cite{kinect-sdk} are two systems that perform figure detection and
-skeleton fitting for the Kinect. Given the maturity of the solutions, we will
-use implementations of figure detection and skeleton fitting. Therefore this
-paper will focus primarily on the classification part of skeleton recognition.
+context~\cite{plagemann:icra10,ganapathi:cvpr10,shotton:cvpr11}. Furthermore,
+OpenNI~\cite{openni} and the Kinect for Windows SDK~\cite{kinect-sdk} are two
+systems that perform figure detection and skeleton fitting for the Kinect.
+Given the maturity of the solutions, we will use implementations of figure
+detection and skeleton fitting. Therefore this paper will focus primarily on
+the classification part of skeleton recognition.
 
 %a person from an image to measure gait, but can also be measured from floor
diff --git a/uniqueness.tex b/uniqueness.tex
index 66f21a4..ce1cd3d 100644
--- a/uniqueness.tex
+++ b/uniqueness.tex
@@ -34,14 +34,14 @@ additional image pairs from the input data. This is referred to as the
 In order to run an experiment similar to the one used in the face
 pair-matching problem (Section~\ref{sec:frb}), we use the Goldman
 Osteological Dataset \cite{deadbodies}. This dataset consists of
-skeletal measurements of 1538 skeletons uncovered around the world and
-dating throughout the last several thousand years. Given the way these
-data were collected, only a partial view of the skeleton is available,
-we keep six measurements: the lengths of four bones (radius, humerus,
-femur, and tibia) and the breadth and height of the pelvis. Because
-of missing values, this reduces the size of the dataset to 1191.
+skeletal measurements of 1,538 skeletons uncovered around the world and dating
+from the modern geological era. Given the way this data was collected, only a
+partial view of the skeleton is available. We keep six measurements: the
+lengths of four bones (radius, humerus, femur, and tibia) and the breadth and
+height of the pelvis. Because of missing values, this reduces the size of the
+dataset to 1,191.
 
-From this dataset, 1191 matched pairs and 1191 unmatched pairs are
+From this dataset, 1,191 matched pairs and 1,191 unmatched pairs are
 generated. In practice, the exact measurements of the bones of living
 subjects are not directly accessible. Therefore, measurements are
 likely to have an error rate, whose variance depends on the method of
