\section{Real-World Evaluation}
\label{sec:experiment}
We conduct an uncontrolled, real-life experiment using the Kinect to test our
algorithms. We first describe our approach to data collection, then how the
data is processed and classified, and finally discuss the results.
\subsection{Dataset}
\label{sec:experiment:dataset}
The Kinect outputs three primary signals in real-time: a color image stream, a
depth image stream, and microphone output (\fref{fig:hallway}). For our
purposes, we focus on the depth image stream. As the Kinect was designed to
interface directly with the Xbox 360, the tools to interact with it on a PC are
limited. The OpenKinect project released
\textsf{libfreenect}~\cite{libfreenect}, a reverse-engineered driver that
gives access to the Kinect's raw depth images. This raw data could be
used to implement skeleton fitting algorithms, \eg those of
Plagemann~\etal{}~\cite{plagemann:icra10}. Alternatively,
OpenNI~\cite{openni}, an open framework led by PrimeSense, the company behind
the technology of the Kinect, offers figure tracking and skeleton fitting
algorithms on top of raw access to the data streams. More recently, the Kinect
for Windows SDK~\cite{kinect-sdk} was released, also with figure tracking
and skeleton fitting algorithms.
We evaluated both OpenNI and the Kinect SDK for skeleton recognition. The
skeleton fitting algorithm of OpenNI requires each individual to strike a
specific calibration pose, which makes large-scale data collection difficult.
We select the Kinect SDK for our data collection since it operates in
real time without calibration.
We collect data using the Kinect SDK over a period of a week in a research
laboratory. The Kinect is placed at the tee of a frequently used hallway. Its
view is shown in \fref{fig:hallway}: the color image, the depth image, and the
fitted skeleton of a person in a single frame. Skeletons are fit at roughly
1--5 meters from the Kinect. For each frame in which a person is detected and
a skeleton is fit, we capture the 3-D coordinates of 20 body joints as well as
the color image.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.99\textwidth]{graphics/hallway.png}
\end{center}
\vspace{-\baselineskip}
\caption{Experiment setting. Color image, depth image, and fitted
skeleton as captured by the Kinect in a single frame}
\label{fig:hallway}
\end{figure}
In some frames, one or several joints are out of view or occluded by another
part of the body. In those cases, the coordinates of these joints are either
absent from the frame or present but tagged as \emph{Inferred} by the Kinect
SDK: even though the joint is not visible, the skeleton fitting algorithm
attempts to estimate its location.
\subsection{Experiment design}
\label{sec:experiment-design}
We preprocess the dataset to extract \emph{features}
from the raw data. First, the lengths of 15 body parts are computed from the
joint coordinates; these are the distances between two contiguous joints in
the human body. If either joint of a body part is absent or inferred in a
frame, the corresponding body part is reported as absent for that frame.
Second, we reduce the number of features to nine by exploiting the left-right
symmetry of the human body: if two body parts are mirror images of each other
(\eg the left and right forearms), we bundle them into one feature by
averaging their lengths; if only one of them is present, we take its value;
if neither is present, the feature is reported as missing for the frame.
Finally, any frame with a missing feature is filtered out. The resulting nine
features include the six arm, leg, and pelvis measurements from
\xref{sec:uniqueness}, and three additional measurements: spine length,
shoulder breadth, and head size. We list the nine features as pairs of joints
below; a sketch of the preprocessing follows the list.
\vspace{-1.5\baselineskip}
\begin{table}
\begin{center}
\begin{tabular}{ll}
Head-ShoulderCenter & Spine-HipCenter\\
ShoulderCenter-Shoulder & HipCenter-Hip\\
Shoulder-Elbow & Hip-Knee\\
Elbow-Wrist & Knee-Ankle\\
ShoulderCenter-Spine &\\
\end{tabular}
\end{center}
\end{table}
\vspace{-2.5\baselineskip}
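The preprocessing can be summarized with the following Python sketch. The
frame representation (a dictionary from joint name to 3-D coordinates, with
absent or inferred joints omitted) and the exact left/right pairing are
illustrative assumptions reconstructed from the list above, not the Kinect
SDK's actual data structures.
\begin{verbatim}
import numpy as np

FEATURES = {
    "Head-ShoulderCenter":     [("Head", "ShoulderCenter")],
    "ShoulderCenter-Shoulder": [("ShoulderCenter", "ShoulderLeft"),
                                ("ShoulderCenter", "ShoulderRight")],
    "Shoulder-Elbow":          [("ShoulderLeft", "ElbowLeft"),
                                ("ShoulderRight", "ElbowRight")],
    "Elbow-Wrist":             [("ElbowLeft", "WristLeft"),
                                ("ElbowRight", "WristRight")],
    "ShoulderCenter-Spine":    [("ShoulderCenter", "Spine")],
    "Spine-HipCenter":         [("Spine", "HipCenter")],
    "HipCenter-Hip":           [("HipCenter", "HipLeft"),
                                ("HipCenter", "HipRight")],
    "Hip-Knee":                [("HipLeft", "KneeLeft"),
                                ("HipRight", "KneeRight")],
    "Knee-Ankle":              [("KneeLeft", "AnkleLeft"),
                                ("KneeRight", "AnkleRight")],
}

def extract_features(frame):
    # frame: dict of joint name -> (x, y, z); absent or Inferred
    # joints are omitted.  Returns the 9-dimensional feature
    # vector, or None if any feature is missing.
    vec = []
    for pairs in FEATURES.values():
        lengths = [np.linalg.norm(np.subtract(frame[a], frame[b]))
                   for a, b in pairs if a in frame and b in frame]
        if not lengths:
            return None               # missing feature: drop frame
        vec.append(np.mean(lengths))  # average left/right lengths
    return np.array(vec)
\end{verbatim}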
Each detected skeleton also carries an ID number obtained from the figure
detection stage. Consecutive frames with the same ID mean that figure
detection was able to track the figure contiguously. This allows us to define
the concept of a \emph{run}: a sequence of consecutive frames with the same
skeleton ID. Because of errors in the depth image when a figure enters or
exits the range of the camera, we only keep the frames of a run in which the
skeleton is 2--3 meters away from the Kinect.
Ground truth identities are obtained by manually labelling each run based on
the images captured by the color camera of the Kinect. For ease of labelling,
only the runs with people walking toward the camera are kept; these are the
runs in which the average distance from the skeleton joints to the camera
decreases over time. A sketch of this run extraction is given below.
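As a concrete illustration, the following sketch segments the frame stream
into runs and applies the two filters above. The frame representation (a
dictionary with a \texttt{skeleton\_id} field and a \texttt{joints} mapping)
is an assumption, and the camera is taken to sit at the origin of the
coordinate system.
\begin{verbatim}
from itertools import groupby
import numpy as np

def avg_distance(frame):
    # Mean distance of the skeleton joints to the camera,
    # assumed to sit at the origin.
    pts = frame["joints"].values()
    return float(np.mean([np.linalg.norm(p) for p in pts]))

def runs(frames):
    # A run: maximal sequence of consecutive frames sharing an ID.
    for _, grp in groupby(frames, key=lambda f: f["skeleton_id"]):
        yield list(grp)

def trim(run, near=2.0, far=3.0):
    # Keep only frames in the 2-3 m band, avoiding depth errors
    # when a figure enters or exits the camera range.
    return [f for f in run if near <= avg_distance(f) <= far]

def walking_toward(run):
    # Distance to the camera decreases when walking toward it.
    d = [avg_distance(f) for f in run]
    return d[-1] < d[0]
\end{verbatim}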
We perform five experiments. First, we test the performance of
skeleton recognition using traditional 10-fold cross validation, to
represent an offline learning setting. Second, we run our algorithms
in an online learning setting by training and testing the data over
time. Third, we pit skeleton recognition against the state-of-the-art
in face recognition. Next, we test how our solution performs when
people are walking away from the camera. Finally, we study what
happens if the noise from the Kinect is reduced.
\begin{figure}[t]
\begin{center}
\includegraphics[]{graphics/frames.pdf}
\end{center}
\vspace{-1.5\baselineskip}
\caption{Distribution of the frequency of each individual in the
dataset}
\label{fig:frames}
\end{figure}
\subsection{Offline learning setting}
\label{sec:experiment:offline}
In the first experiment, we study the accuracy of skeleton recognition using
10-fold cross validation. The dataset is partitioned into 10 contiguous time
sequences of equal size. For a given recall threshold, the algorithm is
trained on 9 sequences and tested on the remaining one; this is repeated for
all 10 possible testing sequences, and averaging the prediction rate over
these 10 training-testing experiments yields the prediction rate for the
chosen threshold (see the sketch below). We test the mixture of Gaussians
(MoG) and sequential hypothesis testing (SHT) models, with group sizes
$n_p \in \{3,5,10,25\}$.
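For concreteness, the cross-validation loop can be sketched as follows. The
\texttt{train\_and\_test} callable, standing in for either MoG or SHT at a
fixed threshold, is a placeholder; only the contiguous splitting and the
averaging are taken from the procedure above.
\begin{verbatim}
def ten_fold(frames, train_and_test):
    # Split the time-ordered frames into 10 contiguous blocks,
    # hold out each block in turn, and average the rates.
    n = len(frames)
    cuts = [n * i // 10 for i in range(11)]
    blocks = [frames[cuts[i]:cuts[i + 1]] for i in range(10)]
    rates = []
    for i in range(10):
        train = [f for j, b in enumerate(blocks)
                 if j != i for f in b]
        rates.append(train_and_test(train, blocks[i]))
    return sum(rates) / len(rates)
\end{verbatim}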
\fref{fig:offline} shows the precision-recall plot as the threshold varies.
Both algorithms perform three times better than the majority-class baseline of
15\% at a recall of 100\% on all people. We make two main observations.
First, as expected, SHT performs better than MoG because of temporal smoothing.
Second, performance degrades as the group size grows: as we test against more
people, there are more overlaps between skeleton profiles due to the noise, as
discussed in \xref{sec:uniqueness}, and the least present people appear in few
frames (\fref{fig:frames}), which may not permit proper training of the
algorithm. For 3 and 5 people (typical family sizes), we see recognition rates
mostly above 90\%, and we reach 90\% precision at 60\% recall for a group size
of 10 people.
\begin{figure*}[t]
\begin{center}
\subfloat[Mixture of Gaussians]{
\includegraphics[]{graphics/offline-nb.pdf}
\label{fig:offline:nb}
}
\subfloat[Sequential Hypothesis Testing]{
\includegraphics[]{graphics/offline-sht.pdf}
\label{fig:offline:sht}
}
\caption{Results with 10-fold cross-validation for the top $n_p$ most present people}
\label{fig:offline}
\end{center}
\vspace{-1.5\baselineskip}
\end{figure*}
\subsection{Online learning setting}
In the second experiment, we evaluate skeleton recognition in an online
setting. Even though the previous evaluation is standard, it does not properly
reflect reality. A real-life setting could be as follows. The camera is placed
at the entrance of a building. When a person enters the building, their
identity is determined by the electronic key system and a new labeled run is
added to the dataset. The identification algorithm is then retrained on the
augmented dataset, and the newly obtained classifier can be deployed in the
building.
We evaluate only SHT in this setting, since it already takes consecutive
frames into account and performed better than MoG in the offline setting
(\xref{sec:experiment:offline}). We partition the dataset into 10 time
sequences of equal size. For a given threshold, the algorithm is trained and
tested incrementally: train on the first $k$ sequences (in chronological
order) and test on the $(k+1)$-th sequence, as sketched below.
\fref{fig:online} shows the precision-recall curve when averaging the
prediction rate over the incremental experiments. Overall performance is worse
than in \fref{fig:offline:sht}, since in all but the last step the system
trains on less data than in \xref{sec:experiment:offline}. We still see
recognition rates mostly above 90\% for group sizes of 3 and 5.
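A minimal sketch of this incremental (prequential) evaluation follows; the
\texttt{train} and \texttt{test} callables are placeholders for SHT training
and per-sequence scoring at a fixed threshold.
\begin{verbatim}
def online_evaluation(sequences, train, test):
    # Train on the first k chronological sequences, test on
    # sequence k+1, and average the prediction rate.
    rates = []
    for k in range(1, len(sequences)):
        model = train([f for s in sequences[:k] for f in s])
        rates.append(test(model, sequences[k]))
    return sum(rates) / len(rates)
\end{verbatim}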
\begin{figure}[t]
\parbox[t]{0.49\linewidth}{
\begin{center}
\includegraphics[width=0.49\textwidth]{graphics/online-sht.pdf}
\end{center}
}
\parbox[t]{0.49\linewidth}{
\begin{center}
\includegraphics[width=0.49\textwidth]{graphics/face.pdf}
\end{center}
}
\end{figure}
\begin{figure}
\vspace{-1.5\baselineskip}
\parbox[t]{0.48\linewidth}{
\caption{Results for the online setting, where $n_p$ is the size of
the group as in Figure~\ref{fig:offline}}
\label{fig:online}
}
\hspace{0.02\linewidth}
\parbox[t]{0.48\linewidth}{
\caption{Results for face recognition versus skeleton recognition}
\label{fig:face}
}
\end{figure}
\subsection{Face recognition}
In the third experiment, we compare the performance of skeleton recognition
with that of face recognition as provided by \textsf{face.com}. At the
time of writing, this is the best-performing face recognition algorithm on the
LFW dataset\footnote{\url{http://vis-www.cs.umass.edu/lfw/results.html}}.
The results show that face recognition has better accuracy than skeleton
recognition, but not by a large margin.
We use the publicly available REST API of \textsf{face.com} to do face
recognition on our dataset. Due to the restrictions of the API, for this
experiment we train on one half of the data and test on the remaining half. For
comparison, the MoG algorithm is run with the same training-testing partitioning of
the dataset. SHT is not suitable for this comparison because
\textsf{face.com} provides no way to mark a sequence of frames as belonging to
the same run; this additional information would be used by the SHT algorithm
and would bias the results in favor of skeleton recognition.
\begin{figure}[t]
\parbox[t]{0.49\linewidth}{
\begin{center}
\includegraphics[width=0.49\textwidth]{graphics/back.pdf}
\end{center}
}
\parbox[t]{0.49\linewidth}{
\begin{center}
\includegraphics[width=0.49\textwidth]{graphics/var.pdf}
\end{center}
}
\end{figure}
\begin{figure}
\vspace{-1.5\baselineskip}
\parbox[t]{0.48\linewidth}{
\caption{Results with people walking away from and toward the camera}
\label{fig:back}
}
\hspace{0.02\linewidth}
\parbox[t]{0.48\linewidth}{
\caption{Results with and without halving the standard deviation of the noise}
\label{fig:var}
}
\end{figure}
\subsection{Walking away}
In the next experiment, we include the runs in which people walk away from the
Kinect, keeping those we could positively identify. Face recognition
outperforms skeleton recognition in the previous setting; however, there are
many cases where only skeleton recognition is possible, the most obvious being
people walking away from the camera. Returning to the raw data collected
during the experiment, we manually label the runs of people walking away from
the camera. In this case, the ground truth is harder to establish, and some
runs are dropped because the person cannot be recognized. Apart from that, the
dataset reduction is performed exactly as explained in
\xref{sec:experiment-design}. Our results show that we can identify people
walking away from the camera just as well as when they are walking toward it.
\fref{fig:back} compares the results obtained in \xref{sec:experiment:offline}
with people walking toward the camera to the results of the same experiment on
the dataset of runs of people walking away from the camera. The two results
are similar. However, one could argue that since the two datasets are
completely disjoint, the SHT algorithm does not learn the same profile for a
person walking toward the camera as for a person walking away from it. The
third curve of \fref{fig:back} therefore shows the precision-recall curve when
training and testing on the combined dataset of runs toward and away from the
camera; performance is again similar.
\subsection{Reducing the noise}
For the final experiment, we study what happens when the noise of the Kinect
is reduced. Our algorithm relies on nine features only, and
\xref{sec:uniqueness} shows that six of these features alone are sufficient to
perfectly distinguish two different skeletons at a low noise level; the main
source of classification error is therefore the dispersion of the observed
limb lengths away from the exact measurements.
To simulate a reduction of the noise level, the dataset is modified as
follows: we compute the average profile of each person, and for each frame we
divide the deviation from the average by 2. Formally, using
the same notations as in Section~\ref{sec:mixture of Gaussians}, each
observation $\bx_i$ is replaced by $\bx_i'$ defined by:
\begin{equation}
\bx_i' = \bar{\bx}_{y_i} + \frac{\bx_i-\bar{\bx}_{y_i}}{2}
\end{equation}
We believe that halving the noise's standard deviation is realistic given the
relatively low resolution of the Kinect's infrared camera; a sketch of this
transformation is given below.
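The transformation can be applied to the feature matrix as follows. This is a
minimal sketch assuming the features are stacked in a NumPy array \texttt{X}
with one row per frame and the person labels in \texttt{y}; both names are
illustrative.
\begin{verbatim}
import numpy as np

def shrink_noise(X, y, factor=2.0):
    # Pull each observation toward its person's average profile,
    # dividing the deviation (hence the standard deviation of
    # the noise) by `factor`.
    X, y = np.asarray(X, float), np.asarray(y)
    Xp = X.copy()
    for person in np.unique(y):
        mask = (y == person)
        mean = X[mask].mean(axis=0)
        Xp[mask] = mean + (X[mask] - mean) / factor
    return Xp
\end{verbatim}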
\fref{fig:var} compares the precision-recall curve of \fref{fig:offline:sht}
to the curve of the same experiment run on the newly obtained dataset. We
observe a roughly 20\% increase in performance across most thresholds. Note
that these results would significantly outperform face recognition.
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kinect"
%%% End: