Adversarial Action Data Augmentation for Similar Gesture Action Recognition
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- 2019 International Joint Conference on Neural Networks (IJCNN), July 2019
- Issue Date:
- 2019-07-19
Closed Access
Filename | Description | Size
---|---|---
IJCNN 2019 - Di - Paper.pdf | Published version | 3.07 MB
This item is closed access and not available.
Human gestures are key cues for recognizing and describing human actions, and video-based human action recognition is an effective solution for various real-world applications such as surveillance, video indexing, and human-computer interaction. Most existing video-based action recognition approaches either use handcrafted features extracted from frames or deep learning models such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs); however, they have largely overlooked the similar gestures shared by different actions when feeding frames into the models. Classifiers suffer from the near-identical features extracted from these similar gestures and are unable to distinguish the corresponding actions in video streams. In this paper, we propose a novel framework based on generative adversarial networks (GANs) that generates data augmentation for similar-gesture action recognition. The contribution of our work is threefold: 1) we propose a novel action data augmentation framework (ADAF) that enlarges the differences between actions with very similar gestures; 2) the framework boosts classification performance both on similar-gesture action pairs and on the whole dataset; 3) experiments conducted on the KTH and UCF101 datasets show that our data augmentation framework improves performance on similar-gesture actions as well as on the whole dataset, compared with baseline methods such as 2D CNN and 3D CNN.
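The abstract does not describe ADAF's internals, but the core mechanism it builds on, training a GAN and appending its synthetic samples to the training set as augmentation, can be illustrated with a toy one-dimensional example. Everything below (the linear generator, logistic discriminator, hyperparameters, and function names) is an assumption for illustration only, not the paper's method:

```python
import math
import random

# Toy 1-D GAN sketch of adversarial data augmentation (illustrative only;
# NOT the paper's ADAF). Generator: G(z) = a*z + b; discriminator:
# D(x) = sigmoid(w*x + c). Both are trained by manual gradient steps.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_toy_gan(real_samples, steps=2000, lr=0.01, seed=0):
    rng = random.Random(seed)
    a, b = 1.0, 0.0   # generator parameters
    w, c = 0.1, 0.0   # discriminator parameters
    for _ in range(steps):
        x_real = rng.choice(real_samples)
        z = rng.gauss(0.0, 1.0)
        x_fake = a * z + b
        # Discriminator ascent on log D(x_real) + log(1 - D(G(z)))
        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * x_fake + c)
        w += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
        c += lr * ((1.0 - d_real) - d_fake)
        # Generator ascent on log D(G(z)) (non-saturating GAN loss)
        z = rng.gauss(0.0, 1.0)
        x_fake = a * z + b
        d_fake = sigmoid(w * x_fake + c)
        grad = (1.0 - d_fake) * w   # d log D(G(z)) / d x_fake
        a += lr * grad * z
        b += lr * grad
    return a, b

def augment(real_samples, n_new, seed=1):
    """Generate n_new synthetic samples to append to the training set."""
    a, b = train_toy_gan(real_samples)
    rng = random.Random(seed)
    return [a * rng.gauss(0.0, 1.0) + b for _ in range(n_new)]

random.seed(42)
real = [random.gauss(4.0, 0.5) for _ in range(200)]  # one "action" class
synthetic = augment(real, n_new=50)
augmented_set = real + synthetic
print(len(augmented_set))  # 250
```

In the paper's setting the samples would be video features rather than scalars, and the augmentation targets pairs of actions with similar gestures so the classifier sees more separable training data.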