Multi-label Image Classification with Regional Latent Semantic Dependencies
- Publication Type: Journal Article
- Journal: IEEE Transactions on Multimedia
- Issue Date: 2018
Deep convolutional neural networks (CNNs) have demonstrated advanced performance on single-label image classification, and progress has also been made in applying CNN methods to multi-label image classification, which requires annotating objects, attributes, scene categories, etc., in a single shot. Recent state-of-the-art approaches to multi-label image classification exploit label dependencies in an image at the global level, greatly improving labeling capacity. However, predicting small objects and visual concepts remains challenging due to the limited discrimination of global visual features. In this paper, we propose a Regional Latent Semantic Dependencies model (RLSD) to address this problem. The model includes a fully convolutional localization architecture to localize regions that may contain multiple highly dependent labels. The localized regions are then fed into recurrent neural networks (RNNs) to characterize the latent semantic dependencies at the regional level. Experimental results on several benchmark datasets show that our proposed model achieves the best performance compared to state-of-the-art models, especially for predicting small objects in images. We also set up an upper-bound model (RLSD+ft-RPN) that uses bounding-box coordinates during training; the experimental results show that RLSD can approach this upper bound without using bounding-box annotations, which is more realistic in real-world settings.
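The regional pipeline the abstract describes (localize candidate regions, run an RNN over regional features, aggregate per-region label scores into an image-level multi-label prediction) can be illustrated with a minimal NumPy sketch. This is a hypothetical toy, not the paper's implementation: the dimensions, the simple RNN cell, and the max-pooling aggregation over regions are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch of the RLSD-style pipeline (all names and sizes
# are assumptions, not the paper's implementation):
#   1) a localization step yields per-region features,
#   2) an RNN is run over regions to capture label dependencies,
#   3) per-region label scores are max-pooled into an image-level prediction.

rng = np.random.default_rng(0)

NUM_REGIONS, FEAT_DIM, HIDDEN, NUM_LABELS = 4, 8, 16, 5

# Stand-in for regional CNN features from the localization architecture.
region_feats = rng.normal(size=(NUM_REGIONS, FEAT_DIM))

# Simple RNN cell parameters (random, untrained, for illustration only).
W_xh = rng.normal(scale=0.1, size=(FEAT_DIM, HIDDEN))
W_hh = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_hy = rng.normal(scale=0.1, size=(HIDDEN, NUM_LABELS))


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def rlsd_forward(feats):
    """Run the RNN over regions and max-pool per-region label scores."""
    h = np.zeros(HIDDEN)
    region_scores = []
    for x in feats:                       # one RNN step per region
        h = np.tanh(x @ W_xh + h @ W_hh)  # latent state carries dependencies
        region_scores.append(sigmoid(h @ W_hy))
    # Image-level prediction: max over regions, per label.
    return np.max(np.stack(region_scores), axis=0)


probs = rlsd_forward(region_feats)
print(probs.shape)  # one probability per label: (5,)
```

In the actual model the regional features come from a trained fully convolutional localization architecture and the recurrent weights are learned; this sketch only shows how regional evidence can flow through a recurrent state before being aggregated.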