Exploiting textual and visual features for image categorization
- Publication Type: Journal Article
- Citation: Pattern Recognition Letters, 2019, 117, pp. 140-145
- Issue Date: 2019-01-01
Closed Access
Filename | Description | Size
---|---|---
1-s2.0-S0167865518302216-main.pdf | Published Version | 586.93 kB
This item is closed access and not available.
© 2018. Studies show that refining real-world categories into semantic subcategories contributes to better image modeling and classification. Previous image sub-categorization work relies on labeled images and WordNet's hierarchy, which is labor-intensive. To tackle this problem, we extract textual and visual features to automatically select web images and classify them into semantically rich categories. Two major challenges are studied: (1) noise in the labels of subcategories derived from the general corpus, and (2) noise in the labels of images retrieved from the web. Specifically, we first obtain semantically refined subcategories from the text perspective and remove noise using a relevance-based approach. To suppress noisy images induced by search errors, we then formulate image selection and classifier learning as a multi-instance learning problem and solve it with the cutting-plane algorithm. Experiments show significant performance gains when the data generated by our approach is used for image categorization tasks, and the proposed approach consistently outperforms existing weakly supervised and web-supervised approaches.
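The abstract's idea of jointly selecting images and learning a classifier under label noise can be illustrated with a generic multi-instance learning sketch. This is not the paper's objective or its cutting-plane solver; it is a minimal MI-SVM-style alternation (all function names and parameters below are assumptions) in which each web search result set is a positive bag and noisy images are suppressed by selecting one "witness" instance per bag.

```python
import numpy as np

def train_linear(X, y, epochs=200, lr=0.1, lam=0.01):
    """Simple SGD on the hinge loss (linear SVM-style), for illustration only."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated: push toward yi*xi
                w = (1 - lr * lam) * w + lr * yi * xi
                b += lr * yi
            else:                               # only shrink w (regularization)
                w = (1 - lr * lam) * w
    return w, b

def mi_learn(pos_bags, neg_instances, iters=5):
    """MI-SVM-style alternation: each bag is one web search result set.

    Alternate between (a) training a linear classifier on the current
    witness instances and (b) re-selecting each bag's witness as its
    highest-scoring instance, which suppresses noisy images in the bag.
    """
    witnesses = [bag.mean(axis=0) for bag in pos_bags]  # init: bag centroids
    for _ in range(iters):
        X = np.vstack([np.array(witnesses), neg_instances])
        y = np.array([1] * len(witnesses) + [-1] * len(neg_instances))
        w, b = train_linear(X, y)
        witnesses = [bag[np.argmax(bag @ w + b)] for bag in pos_bags]
    return w, b
```

On synthetic bags that mix relevant and noisy instances, the alternation drifts the witnesses toward the relevant cluster, mirroring how the abstract's formulation lets classifier learning and image selection reinforce each other.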