| author | Thibaut Horel <thibaut.horel@gmail.com> | 2012-03-04 14:01:35 -0800 |
|---|---|---|
| committer | Thibaut Horel <thibaut.horel@gmail.com> | 2012-03-04 14:01:35 -0800 |
| commit | 5f8f434768643cb3f307bd7fc43bf3ad7b9604d3 (patch) | |
| tree | 67ec277a8c8058ee6419350cfaf062baa194e364 | |
| parent | d7b0798050f72f08bcb3995c465efeb9bf9f516d (diff) | |
| download | kinect-5f8f434768643cb3f307bd7fc43bf3ad7b9604d3.tar.gz | |
Finishing taking Brano's comment in section 3
| -rw-r--r-- | uniqueness.tex | 84 |
1 files changed, 40 insertions, 44 deletions
```diff
diff --git a/uniqueness.tex b/uniqueness.tex
index 2ed0f93..f034a21 100644
--- a/uniqueness.tex
+++ b/uniqueness.tex
@@ -14,20 +14,20 @@ problem}.
 In this problem you are given two measurements of the
 metric and you want to decide whether they come from the same individual
 (matched pair) or from two different individuals (unmatched pair).
-This benchmark is standard for face recognition using the \emph{Labeled Faces
-in the Wild} \cite{lfw} database. Raw data of this benchmark is publicly
-available and has been derived as follows: the database is split into 10
-subsets. From each of these subsets, 300 matched pairs and 300 unmatched pairs
-are randomly chosen. Each algorithm runs 10 separate leave-one-out
-cross-validation experiments on these sets of pairs. Averaging the number of
-true positives and false positives across the 10 experiments for a given
-threshold then yields one point on the receiver operating characteristic (ROC)
-curve, which plots the true-positive rate against the false-positive rate as
-the threshold of the algorithm varies. Note that in this benchmark the identity
-information of the individuals appearing in the pairs is not available, which
-means that the algorithms cannot form additional image pairs from the input
-data. This is referred to as the \emph{image-restricted} setting in the LFW
-benchmark.
+This benchmark is standard for face recognition using the
+\emph{Labeled Faces in the Wild} \cite{lfw} database. Raw data of
+this benchmark is publicly available and has been derived as follows:
+the database is split into 10 subsets. From each of these subsets, 300
+matched pairs and 300 unmatched pairs are randomly chosen. Each
+algorithm runs 10 separate leave-one-out cross-validation experiments
+on these sets of pairs. The average of the false-positive rates and
+the true-positive rates across the 10 experiments for a given
+threshold gives one operating point on the receiver operating
+characteristic (ROC) curve (Figure~\ref{fig:roc}). Note that in this
+benchmark the identity information of the individuals appearing in the
+pairs is not available, which means that the algorithms cannot form
+additional image pairs from the input data. This is referred to as the
+\emph{image-restricted} setting in the LFW benchmark.
 
 \subsection{Experiment design}
 
@@ -35,32 +35,31 @@ In order to run an experiment similar to the one used in the face
 pair-matching problem (Section~\ref{sec:frb}), we use the Goldman
 Osteological Dataset \cite{deadbodies}. This dataset consists of
 skeletal measurements of 1538 skeletons uncovered around the world and
-dating from throughout the last several thousand years. Given the way
-these data were collected, only a partial view of the skeleton is
-available, we keep six measurements: the lengths of four bones
-(radius, humerus, femur, and tibia) and the breadth and height of the
-pelvis. Because of missing values, this reduces the size of the
-dataset to 1191.
+dating throughout the last several thousand years. Given the way these
+data were collected, only a partial view of the skeleton is available,
+we keep six measurements: the lengths of four bones (radius, humerus,
+femur, and tibia) and the breadth and height of the pelvis. Because
+of missing values, this reduces the size of the dataset to 1191.
 
 From this dataset, 1191 matched pairs and 1191 unmatched pairs are
 generated. In practice, the exact measurements of the bones of living
 subjects are not directly accessible. Therefore, measurements are
 likely to have an error rate, whose variance depends on the method of
 collection (\eg measuring limbs over clothing versus on bare
-skin). Since there is only one sample per skeleton, we simulate this
-error by adding independent random Gaussian noise to each measurement
-of the pairs.
+skin). Since each skeleton appears only once in the dataset, we
+simulate this error by adding independent random Gaussian noise to
+each measurement of the pairs.
 
 \subsection{Results}
 
-We evaluate the performance of the pair-matching problem on the dataset by using a proximity
-threshold algorithm: for a given threshold, a pair will be classified
-as \emph{matched} if the Euclidean distance between the two skeletons is
-lower than the threshold, and \emph{unmatched} otherwise. Formally, let
+We evaluate the performance of the pair-matching problem on the
+dataset by using a proximity threshold algorithm: for a given
+threshold, a pair will be classified as \emph{matched} if the
+Euclidean distance between the two skeletons is lower than the
+threshold, and \emph{unmatched} otherwise. Formally, let
 $(\bs_1,\bs_2)$ be an input pair of the algorithm
-($\bs_i\in\mathbf{R}_+^{6}$, these are the six bone measurements),
-the output of the algorithm for the threshold $\delta$ is
-defined as:
+($\bs_i\in\mathbf{R}_+^{6}$, are the six bone measurements), the
+output of the algorithm for the threshold $\delta$ is defined as:
 \begin{displaymath}
   A_\delta(\bs_1,\bs_2) = \begin{cases}
     1 & \text{if $d(\bs_1,\bs_2) < \delta$}\\
@@ -73,9 +72,9 @@ defined as:
     \includegraphics[width=0.6\columnwidth]{graphics/roc.pdf}
   \end{center}
   \vspace{-1.5\baselineskip}
-  \caption{ROC curve for several standard deviations of the
-    noise and for the state-of-the-art \emph{Associate-Predict} face
-    detection algorithm. The standard deviation $\sigma$ is shown in millimeters}
+  \caption{ROC curve for several standard deviations of the noise and
+    for the state-of-the-art \emph{Associate-Predict} face detection
+    algorithm. The standard deviation $\sigma$ is shown in millimeters}
   \label{fig:roc}
 \end{figure}
 
@@ -88,28 +87,25 @@ the Image-restricted LFW benchmark: \emph{Associate-Predict}
 The results show that with a standard deviation of 3mm, skeleton
 proximity thresholding performs quite similarly to face detection at
 low false-positive rate. At this noise level, the error is smaller
-than 1cm with 99.9\% probability. Even with a standard
-deviation of 5mm, it is still possible to detect 90\% of the matched
-pairs with a false positive rate of 6\%.
+than 1cm with 99.9\% probability. Even with a standard deviation of
+5mm, it is still possible to detect 90\% of the matched pairs with a
+false positive rate of 6\%.
 
 This experiment gives an idea of the noise variance level above which
-it is not possible to consistently distinguish skeletons.
-For this problem, a classifier can be built by first learning
+it is not possible to consistently distinguish skeletons. If the noise
+is small, a highly accurate classifier can be built by first learning
 a \emph{skeleton profile} for each individual from all the
 measurements in the training set. Then, given a new skeleton
 measurement, the algorithm classifies it to the individual whose
 skeleton profile is closest to the new measurement.
 
 In this case, there are two distinct sources of noise:
 \begin{itemize}
-\item the absolute deviation of the estimator: how far is the estimated profile
-  from the exact skeleton profile of the person due to figure position or
-  motion (\ie from walking).
+\item the absolute deviation of the estimator: how far is the
+  estimated profile from the exact skeleton profile of the person due
+  to figure position or motion (\ie from walking).
 \item the noise of the new measurement: this comes from the device
   doing the measurement.
 \end{itemize}
-The combination of these two noise sources is what can be compared to the
-noise represented on the ROC curves. Section \label{sec:kinect} will
-give more insight on the structure of the noise.
 %%% Local Variables:
 %%% mode: latex
```
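The experiment described in the hunks above reduces to a simple procedure: perturb both skeletons of a pair with independent Gaussian noise, threshold the Euclidean distance between them, and read off one ROC operating point per threshold value. The sketch below is illustrative only and not part of the commit; the function names, the NumPy arrays of shape (pairs, 2, 6), and the fixed random seed are all assumptions.

```python
import numpy as np

def roc_point(matched, unmatched, delta, sigma, seed=0):
    """One ROC operating point for the proximity-threshold rule A_delta.

    matched, unmatched: arrays of shape (n_pairs, 2, 6) holding the six
    bone measurements (radius, humerus, femur, tibia lengths; pelvis
    breadth and height) of each pair. sigma: standard deviation of the
    simulated measurement noise, in the same unit as the measurements.
    """
    rng = np.random.default_rng(seed)

    def classify(pairs):
        # Simulate measurement error with independent Gaussian noise,
        # then threshold the Euclidean distance between the two skeletons.
        noisy = pairs + rng.normal(0.0, sigma, size=pairs.shape)
        distances = np.linalg.norm(noisy[:, 0, :] - noisy[:, 1, :], axis=1)
        return distances < delta          # True = classified as "matched"

    tpr = classify(matched).mean()        # true-positive rate
    fpr = classify(unmatched).mean()      # false-positive rate
    return fpr, tpr

# Sweeping delta traces the ROC curve; the LFW-style benchmark would
# additionally average these rates over the 10 cross-validation folds.
```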

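The last hunk also mentions a stronger classifier: learn a skeleton profile per individual from the training measurements, then assign a new measurement to the individual with the nearest profile. A minimal interpretation follows, assuming the profile is simply the per-individual mean of the six measurements and that the data live in NumPy arrays; the commit itself does not specify how the profile is estimated.

```python
import numpy as np

def fit_profiles(measurements, identities):
    """Learn one skeleton profile per individual.

    measurements: array of shape (n_samples, 6); identities: array of
    n_samples individual labels. Here a profile is just the mean of an
    individual's training measurements (one possible estimator).
    """
    return {ident: measurements[identities == ident].mean(axis=0)
            for ident in np.unique(identities)}

def classify(measurement, profiles):
    """Assign a new 6-dimensional measurement to the individual whose
    profile is closest in Euclidean distance."""
    return min(profiles,
               key=lambda ident: np.linalg.norm(measurement - profiles[ident]))
```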