Unsupervised cross-modal retrieval through adversarial learning
- Publication Type: Conference Proceeding
- Citation: Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2017, pp. 1153–1158
- Issue Date: 2017-08-28
| Filename | Description | Size |
|---|---|---|
| UNSUPERVISED CROSS MODAL RETRIEVAL THROUGH ADVERSARIAL LEARNING.pdf | Published version | 623.93 kB |
This item is closed access and not available.
© 2017 IEEE. The core of existing cross-modal retrieval approaches is to close the gap between modalities, either by finding a maximally correlated subspace or by jointly learning a set of dictionaries. However, the statistical characteristics of the transformed features have not been considered. Inspired by recent advances in adversarial learning and domain adaptation, we propose UCAL, a novel Unsupervised Cross-modal retrieval method based on Adversarial Learning. In addition to maximizing the correlations between modalities, we impose an additional regularization through adversarial learning: a modality classifier is trained to predict the modality of a transformed feature. This can be viewed as a regularizer on the statistical properties of the feature transforms, which ensures that the transformed features are also statistically indistinguishable across modalities. Experiments on popular multimodal datasets show that UCAL achieves performance competitive with state-of-the-art supervised cross-modal retrieval methods.
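The adversarial regularization described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' code: two projectors map image and text features into a shared space, a modality classifier learns to tell the modalities apart, and the projectors are trained both to keep paired features correlated and to fool the classifier. All dimensions, network architectures, the cosine-based correlation loss, and the loss weight are illustrative assumptions.

```python
# Hypothetical sketch of adversarial modality confusion (not the UCAL implementation).
import torch
import torch.nn as nn

class Projector(nn.Module):
    """Maps a modality-specific feature into the shared space."""
    def __init__(self, in_dim, shared_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, shared_dim), nn.Tanh())

    def forward(self, x):
        return self.net(x)

class ModalityClassifier(nn.Module):
    """Predicts whether a shared-space feature came from image (0) or text (1)."""
    def __init__(self, shared_dim):
        super().__init__()
        self.net = nn.Linear(shared_dim, 2)

    def forward(self, z):
        return self.net(z)

# Toy setup: 4096-d image features, 300-d text features, 128-d shared space.
img_proj, txt_proj = Projector(4096, 128), Projector(300, 128)
clf = ModalityClassifier(128)
opt_proj = torch.optim.Adam(
    list(img_proj.parameters()) + list(txt_proj.parameters()), lr=1e-4)
opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

img_feat = torch.randn(32, 4096)  # stand-in for paired image features
txt_feat = torch.randn(32, 300)   # stand-in for paired text features
labels = torch.cat([torch.zeros(32, dtype=torch.long),
                    torch.ones(32, dtype=torch.long)])

for step in range(100):
    # 1) Train the modality classifier to distinguish the two modalities.
    z_img, z_txt = img_proj(img_feat), txt_proj(txt_feat)
    logits = clf(torch.cat([z_img.detach(), z_txt.detach()]))
    loss_clf = ce(logits, labels)
    opt_clf.zero_grad()
    loss_clf.backward()
    opt_clf.step()

    # 2) Train the projectors to (a) correlate paired features and
    #    (b) fool the classifier, making the transformed features
    #    statistically indistinguishable across modalities.
    z_img, z_txt = img_proj(img_feat), txt_proj(txt_feat)
    corr_loss = (1 - nn.functional.cosine_similarity(z_img, z_txt)).mean()
    adv_loss = ce(clf(torch.cat([z_img, z_txt])), 1 - labels)  # flipped labels
    loss_proj = corr_loss + 0.1 * adv_loss
    opt_proj.zero_grad()
    loss_proj.backward()
    opt_proj.step()
```

The flipped-label trick in step 2 is one common way to realize the min-max game; a gradient-reversal layer, as used in domain-adaptation work, would achieve the same confusion objective.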