Deep Discrete Cross-Modal Hashing with Multiple Supervision

Publisher:
ELSEVIER
Publication Type:
Journal Article
Citation:
Neurocomputing, 2022, 486, pp. 215-224
Issue Date:
2022-05-14
File: Deep Discrete Cross-Modal Hashing with Multiple Supervision.pdf (Published version, Adobe PDF, 1.71 MB)
Deep hashing has been widely used for large-scale cross-modal retrieval, benefiting from its low storage cost and fast search speed. However, most existing deep supervised methods preserve only the instance-pairwise relationship supervised by the semantic similarity matrix, which is often insufficient to capture the heterogeneous correlation. We therefore propose Deep Discrete Cross-Modal Hashing with Multiple Supervision (DDCHms) to further enhance the semantic consistency of heterogeneous modalities. It improves semantic retrieval performance through the joint supervision of instance-pairwise, instance-label and class-wise similarities. Specifically, we first use the instance-pairwise similarity matrix to supervise the learning of the heterogeneous networks, which preserves the pairwise correlation from the instance-instance perspective. We also design a semantic network to fully exploit the semantic information embedded in the labels, and it additionally supervises the multi-modal networks at the instance-label level. Furthermore, we propose class-wise hash codes that cooperate with the intrinsic label matrix as prototypes; they guide the hash learning and further ensure the precision and compactness of the learned hash codes. In addition, we design separate discrete optimization strategies for the class-wise hash codes and the unified hash codes, which avoids optimization errors and ensures the high quality of the learned codes. Experiments on three popular datasets show that our method outperforms other state-of-the-art methods in cross-modal retrieval.
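The abstract describes three kinds of supervision (instance-pairwise, instance-label, and class-wise) combined into one hashing objective. The sketch below is a minimal, illustrative PyTorch rendering of that idea, not the paper's actual formulation: the loss forms, the weighting, and names such as `label_codes` and `prototypes` are assumptions introduced here for illustration.

```python
# Illustrative sketch of three supervision terms for cross-modal hashing.
# All shapes, loss forms, and variable names are assumptions, not the
# published DDCHms objective.
import torch
import torch.nn.functional as F

def pairwise_loss(f_img, f_txt, S):
    """Instance-pairwise term: preserve the semantic similarity matrix S
    (S[i, j] = 1 if instances i and j share a label, else 0) via a
    negative log-likelihood on inner products, a common choice in deep
    cross-modal hashing."""
    theta = 0.5 * f_img @ f_txt.t()                  # (n, n) inner products
    return (torch.log1p(torch.exp(theta)) - S * theta).mean()

def instance_label_loss(h, label_codes):
    """Instance-label term: pull each instance's relaxed hash code toward
    the code produced by a semantic (label) network for its label vector."""
    return F.mse_loss(h, label_codes)

def class_wise_loss(h, labels, prototypes):
    """Class-wise term: align codes with class-wise hash prototypes combined
    through the label matrix (multi-label instances average their prototypes)."""
    target = labels @ prototypes / labels.sum(dim=1, keepdim=True).clamp(min=1)
    return F.mse_loss(h, torch.tanh(target))

if __name__ == "__main__":
    n, k, c = 8, 16, 4                               # instances, code length, classes
    f_img, f_txt = torch.randn(n, k), torch.randn(n, k)
    labels = (torch.rand(n, c) > 0.5).float()
    S = (labels @ labels.t() > 0).float()            # pairwise similarity matrix
    label_codes = torch.tanh(torch.randn(n, k))      # stand-in for semantic-network output
    prototypes = torch.sign(torch.randn(c, k))       # stand-in class-wise hash codes

    loss = (pairwise_loss(torch.tanh(f_img), torch.tanh(f_txt), S)
            + instance_label_loss(torch.tanh(f_img), label_codes)
            + class_wise_loss(torch.tanh(f_txt), labels, prototypes))
    print(float(loss))
```

In this toy setup the continuous (tanh-relaxed) outputs stand in for the heterogeneous networks' hash representations; the paper's separate discrete optimization of the class-wise and unified hash codes is not reproduced here.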