
Multimodal Similarity-preserving Hashing

Masci Jonathan, Bronstein Michael M., Bronstein Alexander M., Schmidhuber Jürgen. arXiv 2012

[Paper]    
ARXIV Cross Modal Supervised

We introduce an efficient computational framework for hashing data belonging to multiple modalities into a single representation space where they become mutually comparable. The proposed approach is based on a novel coupled siamese neural network architecture and allows unified treatment of intra- and inter-modality similarity learning. Unlike existing cross-modality similarity learning approaches, our hashing functions are not limited to binarized linear projections and can assume arbitrarily complex forms. We show experimentally that our method significantly outperforms state-of-the-art hashing approaches on multimedia retrieval tasks.
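The coupled siamese idea can be illustrated in code: each modality gets its own (possibly nonlinear) hashing network producing relaxed binary codes, and pairs of codes from the two modalities are tied together by a contrastive loss that pulls similar pairs together and pushes dissimilar pairs beyond a margin. The following PyTorch sketch is illustrative only and not the authors' implementation; the layer sizes, margin value, and the `ModalityHasher` / `siamese_hash_loss` names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityHasher(nn.Module):
    """Maps one modality's features to a K-bit relaxed code in [-1, 1]."""
    def __init__(self, input_dim, code_bits, hidden_dim=256):  # sizes are assumptions
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, code_bits),
            nn.Tanh(),  # relaxed codes; take sign() to binarize at retrieval time
        )

    def forward(self, x):
        return self.net(x)

def siamese_hash_loss(code_a, code_b, similar, margin=2.0):
    """Contrastive pair loss: attract similar pairs, repel dissimilar pairs
    until their code distance exceeds the margin."""
    dist = F.pairwise_distance(code_a, code_b)
    pos = similar * dist.pow(2)
    neg = (1 - similar) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()

# Hypothetical usage with random image/text feature batches and a
# cross-modal similarity indicator (1 = similar pair, 0 = dissimilar).
img_hasher = ModalityHasher(input_dim=512, code_bits=64)
txt_hasher = ModalityHasher(input_dim=300, code_bits=64)
img_feats, txt_feats = torch.randn(32, 512), torch.randn(32, 300)
similar = torch.randint(0, 2, (32,)).float()
loss = siamese_hash_loss(img_hasher(img_feats), txt_hasher(txt_feats), similar)
```

Intra-modality similarity terms can be handled the same way by applying the pair loss to codes produced by a single modality's network, which is what allows the unified treatment of intra- and inter-modality learning described above.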

Similar Work