
Fusion-supervised Deep Cross-modal Hashing

Li Wang, Lei Zhu, En Yu, Jiande Sun, Huaxiang Zhang. 2019 IEEE International Conference on Multimedia and Expo (ICME) – 15 citations


Deep hashing has recently received attention in cross-modal retrieval for its impressive advantages. However, existing cross-modal hashing methods cannot fully capture the correlation of heterogeneous multi-modal data or fully exploit semantic information. In this paper, we propose a novel Fusion-supervised Deep Cross-modal Hashing (FDCH) approach. First, FDCH learns unified binary codes through a fusion hash network that takes paired samples as input, which effectively enhances the modeling of correlations across heterogeneous multi-modal data. These high-quality unified hash codes then supervise the training of modality-specific hash networks for encoding out-of-sample queries. Meanwhile, both pair-wise similarity information and classification information are embedded in the hash networks within a single-stream framework, which simultaneously preserves cross-modal similarity and semantic consistency. Experimental results on two benchmark datasets demonstrate the state-of-the-art performance of FDCH.
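To make the two-stage idea concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a fusion network learns unified codes from paired inputs under pair-wise similarity and classification losses, and those codes then supervise a modality-specific network for out-of-sample queries. All layer sizes, feature dimensions, loss forms, and weightings here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FusionHashNet(nn.Module):
    """Stage 1: learns unified codes from paired image/text features.
    Dimensions (4096-d image, 1386-d text, 64-bit codes, 24 classes) are assumed."""
    def __init__(self, img_dim=4096, txt_dim=1386, code_len=64, n_classes=24):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 1024), nn.ReLU(),
            nn.Linear(1024, code_len), nn.Tanh(),  # tanh relaxes sign() binarization
        )
        self.classifier = nn.Linear(code_len, n_classes)  # semantic consistency branch

    def forward(self, img_feat, txt_feat):
        h = self.fuse(torch.cat([img_feat, txt_feat], dim=1))
        return h, self.classifier(h)

class ModalityHashNet(nn.Module):
    """Stage 2: encodes a single modality, trained to regress the unified codes
    so that out-of-sample queries from either modality can be hashed alone."""
    def __init__(self, in_dim, code_len=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, code_len), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def pairwise_similarity_loss(h, sim):
    """Likelihood-style loss tying code inner products to a 0/1 similarity matrix
    (one common choice in deep hashing; the paper's exact loss may differ)."""
    theta = 0.5 * (h @ h.t())
    return (torch.log1p(torch.exp(theta)) - sim * theta).mean()

# Toy usage with random stand-in features (shapes and labels are assumptions).
img = torch.randn(8, 4096)
txt = torch.randn(8, 1386)
labels = (torch.rand(8, 24) > 0.8).float()
sim = (labels @ labels.t() > 0).float()  # pairs sharing any label count as similar

fusion = FusionHashNet()
h, logits = fusion(img, txt)
loss_fusion = pairwise_similarity_loss(h, sim) + nn.BCEWithLogitsLoss()(logits, labels)

img_net = ModalityHashNet(4096)
unified = torch.sign(h).detach()  # unified codes supervise the modality network
loss_img = nn.MSELoss()(img_net(img), unified) + pairwise_similarity_loss(img_net(img), sim)
codes = torch.sign(img_net(img))  # binary codes used at retrieval time
```

The key design point the sketch mirrors is that binarization is deferred: both networks emit tanh-relaxed codes during training, and sign() is applied only to produce the supervising codes and the final retrieval codes.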

Similar Work