Feature fusion for 3D hand gesture recognition by learning a shared hidden space

Publication Type: Journal Article
Pattern Recognition Letters, 2012, 33 (4), pp. 476 - 484
Hand gesture recognition has been widely applied in human-computer interaction (HCI) systems. Different hand gesture recognition methods have been developed based on particular features, e.g., gesture trajectories and acceleration signals. However, it has been noticed that the limitations of either feature can lead to flaws in an HCI system. In this paper, to overcome these limitations while combining the merits of both features, we propose a novel feature fusion approach for 3D hand gesture recognition. In our approach, gesture trajectories are represented by their intersection counts with randomly generated line segments on their 2D principal planes, while acceleration signals are represented by the coefficients of the discrete cosine transform (DCT). Then, a hidden space shared by the two features is learned by penalized maximum likelihood estimation (MLE). An iterative algorithm, composed of two steps per iteration, is derived for this penalized MLE: the first step solves a standard least squares problem, and the second step solves a Sylvester equation. We tested our hand gesture recognition approach on different hand gesture sets. Results confirm the effectiveness of the feature fusion method. © 2010 Published by Elsevier B.V. All rights reserved.
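The abstract names three computational ingredients: intersection counts of a trajectory with random line segments, DCT coefficients of an acceleration signal, and a Sylvester-equation solve in the second step of each iteration. The sketch below illustrates each ingredient in isolation; it is not the authors' implementation, and all function names, the toy trajectory, and the small matrices are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import solve_sylvester

def segments_cross(a, b, p, q):
    """True if segment a-b properly intersects segment p-q (orientation test)."""
    def ccw(u, v, w):
        return (w[1] - u[1]) * (v[0] - u[0]) > (v[1] - u[1]) * (w[0] - u[0])
    return ccw(a, p, q) != ccw(b, p, q) and ccw(a, b, p) != ccw(a, b, q)

def trajectory_feature(points, n_lines=32, seed=0):
    """Represent a 2D trajectory by its intersection counts with random
    line segments drawn inside the trajectory's bounding box (a hypothetical
    stand-in for the projection onto the 2D principal plane)."""
    rng = np.random.default_rng(seed)
    lo, hi = points.min(axis=0), points.max(axis=0)
    segs = rng.uniform(lo, hi, size=(n_lines, 2, 2))  # n_lines segments, 2 endpoints each
    feat = np.zeros(n_lines)
    for k, (a, b) in enumerate(segs):
        for p, q in zip(points[:-1], points[1:]):
            feat[k] += segments_cross(a, b, p, q)
    return feat

def accel_feature(signal, n_coeffs=16):
    """Represent a 1D acceleration signal by its low-frequency DCT coefficients."""
    return dct(np.asarray(signal, dtype=float), norm="ortho")[:n_coeffs]

# Toy zig-zag trajectory and a synthetic acceleration signal
points = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [3.0, 1.0]])
traj_feat = trajectory_feature(points)
acc_feat = accel_feature(np.sin(np.linspace(0.0, 4.0, 64)))

# Second step of each iteration: a Sylvester equation A X + X B = Q,
# solvable directly (Bartels-Stewart) via scipy. A, B, Q here are arbitrary
# small matrices chosen so a unique solution exists.
A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
Q = np.ones((2, 2))
X = solve_sylvester(A, B, Q)
```

Note that a unique Sylvester solution requires A and -B to share no eigenvalues, which holds for the toy matrices above; the least-squares first step is omitted since it reduces to `np.linalg.lstsq` on the model's design matrices.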