ROMIR: Robust Multi-View Image Re-Ranking
- Publisher:
- Institute of Electrical and Electronics Engineers
- Publication Type:
- Journal Article
- Citation:
- IEEE Transactions on Knowledge and Data Engineering, 2019, 31, (12), pp. 2393-2406
- Issue Date:
- 2019-12-01
Closed Access
Filename | Description | Size
---|---|---
08496826.pdf | Published version | 3.25 MB
This item is closed access and not available.
© 1989-2012 IEEE. In multi-view re-ranking, multiple heterogeneous visual features are usually projected onto a low-dimensional subspace, and the resulting latent representation is then used for subsequent similarity-based ranking. Albeit effective, this standard mechanism underplays the intrinsic structure underlying the latent subspace and does not take into account the substantial noise in the original spaces. In this paper, we propose a robust multi-view image re-ranking strategy. Due to the dramatic variability in image visual appearance, it is necessary to uncover the shared components underlying query-related instances that are visually dissimilar in order to improve the re-ranking accuracy. Consequently, it is reasonable to assume that the latent subspace enjoys the low-rank property, so subspace recovery can be achieved via low-rank modeling. In addition, since real-world data are usually partially contaminated, we employ an ℓ2,1-norm based sparsity constraint to appropriately model the sample-specific mapping noise and enhance model robustness. To produce discriminative representations, we encode a similarity preserving term in our multi-view embedding framework. As a result, sample separability is maximally maintained in the latent subspace with sufficient discriminative power. Extensive evaluations on public landmark benchmarks demonstrate the efficacy and superiority of the proposed method.
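The abstract's ℓ2,1-norm penalty sums the ℓ2 norms of a matrix's columns, so it drives whole columns of the noise matrix toward zero — matching the assumption that a few samples are corrupted in their entirety while most are clean. A minimal sketch of that computation, assuming columns index samples (the exact orientation is a modeling choice, not stated in the abstract):

```python
import numpy as np

def l21_norm(E: np.ndarray) -> float:
    """ℓ2,1-norm: the sum of the ℓ2 norms of the columns of E.

    With one column per sample, this penalty is small only when most
    columns are exactly (or nearly) zero, so minimizing it models
    sample-specific noise: entire corrupted samples get isolated.
    """
    return float(np.sum(np.linalg.norm(E, axis=0)))

# Toy noise matrix: only the second sample (column) is corrupted.
E = np.array([[0.0, 3.0, 0.0],
              [0.0, 4.0, 0.0]])
print(l21_norm(E))  # 5.0 — the single corrupted column contributes everything
```

By contrast, an entrywise ℓ1 penalty would treat the same total corruption spread across many samples identically; the ℓ2,1 structure is what makes the noise model "sample-specific".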