Web-based semantic fragment discovery for online lingual-visual similarity
- Publication Type: Conference Proceeding
- Citation: 31st AAAI Conference on Artificial Intelligence, AAAI 2017, 2017, pp. 182-188
- Issue Date: 2017-01-01
Closed Access
Filename | Description | Size
---|---|---
Web-based semantic fragment discovery for online lingual-visual similarity.pdf | Published version | 3.72 MB
This item is closed access and not available.
Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

In this paper, we present an automatic approach for online discovery of visual-lingual semantic fragments from weakly labeled Internet images. Instead of learning region-entity correspondences from well-labeled image-sentence pairs, our approach directly collects and enhances weakly labeled visual content from the Web and constructs an adaptive visual representation that automatically links generic lingual phrases to their related visual contents. To ensure reliable and efficient semantic discovery, we adopt non-parametric density estimation to re-rank the related visual instances and propose a fast self-similarity-based quality assessment method to identify high-quality semantic fragments. The discovered semantic fragments provide an adaptive joint representation for texts and images, on which lingual-visual similarity can be defined for further co-analysis of heterogeneous multimedia data. Experimental results on semantic fragment quality assessment, sentence-based image retrieval, and automatic multimedia insertion and ordering demonstrate the effectiveness of the proposed framework. The experiments show that the proposed methods make effective use of Web knowledge and generate results competitive with state-of-the-art approaches across these tasks.
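The two re-ranking ideas named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the Gaussian-kernel density estimate, the mean-pairwise-cosine quality proxy, and the names `kde_rerank`, `fragment_quality`, and `bandwidth` are all illustrative assumptions. The intuition is that Web images whose features lie in a dense region of feature space are likely representative of the shared concept, while a fragment whose instances are mutually similar is likely a high-quality one.

```python
import numpy as np

def kde_rerank(features, bandwidth=1.0):
    """Re-rank instances by a leave-one-out Gaussian kernel density estimate.

    Instances in dense regions of feature space are ranked first; outliers
    (likely noisy Web images) receive low density and sink in the ranking.
    This is an illustrative sketch, not the paper's estimator.
    """
    X = np.asarray(features, dtype=float)            # shape (n, d)
    diffs = X[:, None, :] - X[None, :, :]            # pairwise differences
    sq = (diffs ** 2).sum(axis=-1)                   # squared distances
    K = np.exp(-sq / (2.0 * bandwidth ** 2))         # Gaussian kernel matrix
    np.fill_diagonal(K, 0.0)                         # leave-one-out: drop self term
    density = K.sum(axis=1) / (len(X) - 1)
    order = np.argsort(-density)                     # densest instance first
    return order, density

def fragment_quality(features):
    """Score a candidate fragment by the mean pairwise cosine similarity
    of its visual instances -- a simple self-similarity proxy."""
    X = np.asarray(features, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    S = X @ X.T                                       # cosine similarity matrix
    n = len(X)
    return (S.sum() - n) / (n * (n - 1))              # exclude the diagonal
```

For example, a set with three clustered feature vectors and one far-away outlier ranks the outlier last, and a tight cluster scores a higher fragment quality than a scattered one.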