Multilabel Image Classification with Regional Latent Semantic Dependencies

Publication Type:
Journal Article
Citation:
IEEE Transactions on Multimedia, 2018, 20 (10), pp. 2801 - 2813
Issue Date:
2018-10-01
Abstract:
Deep convolutional neural networks (CNNs) have demonstrated advanced performance on single-label image classification, and progress has also been made in applying CNN methods to multilabel image classification, which requires annotating objects, attributes, scene categories, etc., in a single shot. Recent state-of-the-art approaches to multilabel image classification exploit the label dependencies in an image at the global level, largely improving the labeling capacity. However, predicting small objects and visual concepts remains challenging due to the limited discrimination of global visual features. In this paper, we propose a regional latent semantic dependencies model (RLSD) to address this problem. The model includes a fully convolutional localization architecture to localize the regions that may contain multiple highly dependent labels. The localized regions are then fed into recurrent neural networks to characterize the latent semantic dependencies at the regional level. Experimental results on several benchmark datasets show that our proposed model achieves the best performance compared with state-of-the-art models, especially for predicting small objects occurring in the images. We also set up an upper-bound model (RLSD+ft-RPN) that uses bounding-box coordinates during training, and the experimental results show that RLSD can approach this upper bound without using bounding-box annotations, which is more realistic in practice.
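The pipeline the abstract describes, region features passed through a recurrent network whose per-region label scores are aggregated into one multilabel prediction, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function name `rlsd_forward`, the plain tanh RNN, the toy dimensions, and the max-pooling aggregation are all assumptions made for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rlsd_forward(region_feats, Wxh, Whh, Why, bh, by):
    """Illustrative sketch: run a simple RNN over localized region
    features, emit per-step label probabilities, and max-pool them
    into a single multilabel prediction for the whole image."""
    h = np.zeros(Whh.shape[0])
    step_scores = []
    for x in region_feats:                      # one RNN step per region
        h = np.tanh(Wxh @ x + Whh @ h + bh)     # regional hidden state
        step_scores.append(sigmoid(Why @ h + by))
    # aggregate regional predictions: a label fires if any region supports it
    return np.max(np.stack(step_scores), axis=0)

# toy dimensions (assumed): 4 regions, 8-dim features, 6 hidden units, 5 labels
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
Wxh = rng.normal(scale=0.1, size=(6, 8))
Whh = rng.normal(scale=0.1, size=(6, 6))
Why = rng.normal(scale=0.1, size=(5, 6))
bh, by = np.zeros(6), np.zeros(5)

probs = rlsd_forward(feats, Wxh, Whh, Why, bh, by)
print(probs.shape)  # one probability per label
```

Max-pooling over regions is one plausible aggregation choice; it reflects the intuition that a small object need only be detected in the single region that contains it, rather than in the global feature map.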