
Cross-Modal Deep Variational Hashing

Venice Erin Liong, Jiwen Lu, Yap-Peng Tan, and Jie Zhou. ICCV 2017


In this paper, we propose a cross-modal deep variational hashing (CMDVH) method for cross-modality multimedia retrieval. Unlike existing cross-modal hashing methods, which learn a single pair of projections to map each example to a binary vector, we design a couple of deep neural networks to learn non-linear transformations from image-text input pairs so that unified binary codes can be obtained. We then design the modality-specific neural networks in a probabilistic manner, modeling a latent variable to be as close as possible to the inferred binary codes; the latent variable is approximated by a posterior distribution regularized by a known prior. Experimental results on three benchmark datasets show the efficacy of the proposed approach.
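To make the abstract's ingredients concrete, the following is a minimal toy sketch (not the authors' implementation) of the two ideas it names: modality-specific encoders that map image and text features to a shared binary code, and a variational-style regularizer that keeps a latent variable close to a known Gaussian prior. All dimensions, weights, and function names here are hypothetical stand-ins; the paper's actual networks are deep and trained end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not taken from the paper)
d_img, d_txt, n_bits = 512, 300, 32

# Modality-specific "encoders": a single linear layer each, standing in
# for the deep non-linear transformations described in the abstract
W_img = rng.standard_normal((d_img, n_bits)) * 0.01
W_txt = rng.standard_normal((d_txt, n_bits)) * 0.01

def encode(x, W):
    """Map a feature vector to a relaxed code in (-1, 1), then binarize."""
    h = np.tanh(x @ W)   # continuous (relaxed) code
    b = np.sign(h)       # unified binary code in {-1, +1}
    return h, b

def gaussian_kl(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ): the kind of term used to
    regularize an approximate posterior toward a known prior."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# One toy image-text pair
x_img = rng.standard_normal(d_img)
x_txt = rng.standard_normal(d_txt)

h_img, b_img = encode(x_img, W_img)
h_txt, b_txt = encode(x_txt, W_txt)

# A unification loss would pull both relaxed codes toward one shared
# binary code; here we only measure the current disagreement.
unify_loss = np.sum((h_img - h_txt) ** 2)

# Treat the relaxed image code as a posterior mean with unit variance
kl = gaussian_kl(h_img, np.zeros(n_bits))
print(b_img.shape, b_txt.shape, unify_loss >= 0, kl >= 0)
```

In a full training loop, the unification term and the KL-style prior term would be minimized jointly with a retrieval loss, and the binarization would be handled with a relaxation so gradients can flow.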