Automatic image dataset construction with multiple textual metadata
- Publication Type: Conference Proceeding
- Proceedings - IEEE International Conference on Multimedia and Expo, 2016, 2016-August
Files in This Item:
- Automatic Image Dataset Construction with Multiple Textual Metadata.pdf (Accepted Manuscript version, 490.38 kB)
This item is open access.
© 2016 IEEE. The goal of this work is to automatically collect a large number of highly relevant images from the Internet for given queries. A novel image dataset construction framework is proposed that employs multiple textual metadata. Specifically, the given queries are first expanded by searching the Google Books Ngrams Corpus to obtain a richer semantic description, from which visually non-salient and less relevant expansions are then filtered out. After retrieving images from the Internet with the filtered expansions, we further remove noisy images by clustering and by progressively trained Convolutional Neural Networks (CNNs). To verify the effectiveness of the proposed method, we construct a dataset with 10 categories, which is not only much larger than the manually labeled datasets STL-10 and CIFAR-10 but also has comparable cross-dataset generalization ability.
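The pipeline described in the abstract (expand queries, filter expansions, retrieve images, filter noise) can be illustrated with a minimal sketch. All function names, thresholds, and the toy data below are assumptions for illustration only; the paper's actual components (the Google Books Ngrams lookup, web image retrieval, and the clustering/progressive-CNN stages) are replaced with simple stand-ins.

```python
# Minimal sketch of the two filtering stages in the dataset-construction
# pipeline. Names, thresholds, and data are illustrative, not the
# paper's implementation.

def filter_expansions(scored_expansions, min_score=0.2):
    """Keep only query expansions whose relevance score passes a
    threshold (stand-in for the saliency/relevance filtering of
    Ngram-based expansions)."""
    return [term for term, score in scored_expansions if score >= min_score]

def filter_noisy_images(features, keep_ratio=0.75):
    """Drop the images farthest from the feature centroid (stand-in
    for the clustering / progressive-CNN noise filtering)."""
    dim = len(features[0])
    n = len(features)
    centroid = [sum(f[i] for f in features) / n for i in range(dim)]

    def dist(f):
        return sum((a - b) ** 2 for a, b in zip(f, centroid)) ** 0.5

    ranked = sorted(features, key=dist)
    return ranked[: max(1, int(n * keep_ratio))]

# Toy run: hypothetical expansions for the query "airplane".
expansions = [("jet airplane", 0.9), ("airplane ticket", 0.1),
              ("propeller airplane", 0.6)]
kept_terms = filter_expansions(expansions)  # → ['jet airplane', 'propeller airplane']

# Toy 2-D "image features"; the last vector is an obvious outlier.
images = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]]
kept_images = filter_noisy_images(images)   # outlier removed
```

In the paper, the second stage is iterative: a CNN is progressively retrained on the cleaner subset so that the noise filter improves over rounds, which this one-shot centroid sketch does not capture.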