Fusion-supervised Deep Cross-modal Hashing

Wang Li, Zhu Lei, Yu En, Sun Jiande, Zhang Huaxiang. arXiv 2019

[Paper]
ARXIV Cross Modal Supervised

Deep hashing has recently received increasing attention in cross-modal retrieval owing to its storage and retrieval efficiency. However, existing hashing methods for cross-modal retrieval cannot fully capture the correlation among heterogeneous multi-modal data, nor do they fully exploit semantic information. In this paper, we propose a novel Fusion-supervised Deep Cross-modal Hashing (FDCH) approach. First, FDCH learns unified binary codes through a fusion hash network that takes paired samples as input, which strengthens the modeling of correlations across heterogeneous multi-modal data. These high-quality unified hash codes then supervise the training of modality-specific hash networks, which encode out-of-sample queries. Meanwhile, both pair-wise similarity information and classification information are embedded in the hash networks under a one-stream framework, simultaneously preserving cross-modal similarity and maintaining semantic consistency. Experimental results on two benchmark datasets demonstrate the state-of-the-art performance of FDCH.
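
The two-stage scheme the abstract describes (a fusion network producing unified codes, which then supervise modality-specific networks trained with pairwise-similarity and classification losses) can be sketched as follows. This is a minimal PyTorch illustration under assumed design choices, not the authors' implementation: the layer sizes, loss forms, code length `K`, and all names (`FusionHashNet`, `ModalityHashNet`, `pairwise_similarity_loss`) are hypothetical.

```python
# Illustrative sketch of a fusion-supervised cross-modal hashing pipeline.
# All dimensions, loss weights, and names are assumptions for this example.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 32                       # hash code length (assumed)
D_IMG, D_TXT = 4096, 1386    # image/text feature dims (assumed)
N_CLASSES = 24               # label dimension (assumed)

class FusionHashNet(nn.Module):
    """Stage 1: learns unified codes from paired image/text features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(D_IMG + D_TXT, 1024), nn.ReLU(),
            nn.Linear(1024, K), nn.Tanh())          # relaxed binary codes
        self.classifier = nn.Linear(K, N_CLASSES)   # semantic-consistency head

    def forward(self, img, txt):
        h = self.net(torch.cat([img, txt], dim=1))
        return h, self.classifier(h)

class ModalityHashNet(nn.Module):
    """Stage 2: encodes a single modality for out-of-sample queries."""
    def __init__(self, d_in):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, 1024), nn.ReLU(),
            nn.Linear(1024, K), nn.Tanh())
        self.classifier = nn.Linear(K, N_CLASSES)

    def forward(self, x):
        h = self.net(x)
        return h, self.classifier(h)

def pairwise_similarity_loss(h, sim):
    """Likelihood-style pairwise loss (assumed form): inner products of
    relaxed codes should agree with the 0/1 similarity matrix."""
    theta = 0.5 * (h @ h.t())
    return (F.softplus(theta) - sim * theta).mean()

# --- toy training step with random data, for illustration only ---
img = torch.randn(8, D_IMG)
txt = torch.randn(8, D_TXT)
labels = (torch.rand(8, N_CLASSES) > 0.8).float()
sim = (labels @ labels.t() > 0).float()  # pairs sharing any label are similar

# Stage 1: fusion network embeds both similarity and classification signals.
fusion = FusionHashNet()
h_u, logits = fusion(img, txt)
loss_fusion = (pairwise_similarity_loss(h_u, sim)
               + F.binary_cross_entropy_with_logits(logits, labels))

# Stage 2: unified codes supervise a modality-specific network.
b_u = torch.sign(h_u).detach()           # binarized unified codes
img_net = ModalityHashNet(D_IMG)
h_img, logits_img = img_net(img)
loss_img = (F.mse_loss(h_img, b_u)       # regress toward unified codes
            + pairwise_similarity_loss(h_img, sim)
            + F.binary_cross_entropy_with_logits(logits_img, labels))
print(loss_fusion.item(), loss_img.item())
```

In this reading, the fusion network is trained first on paired samples, and its binarized outputs act as fixed targets for the image and text networks, so that single-modality queries at test time map into the same unified code space.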

Similar Work