
Central Similarity Multi-view Hashing For Multimedia Retrieval

Zhu Jian, Cheng Wen, Cui Yu, Tang Chang, Dai Yuyang, Li Yong, Zeng Lingfang. arXiv 2023

ARXIV Cross Modal

Hash representation learning of multi-view heterogeneous data is the key to improving the accuracy of multimedia retrieval. However, existing methods train with only local similarity, ignoring global similarity, and fall short of deeply fusing the multi-view features, which results in poor retrieval accuracy. Furthermore, most recent works fuse the multi-view features via a weighted sum or concatenation; we contend that these fusion methods are insufficient for capturing the interaction between views. To address these problems, we present a novel Central Similarity Multi-View Hashing (CSMVH) method. Central similarity learning resolves the local-similarity problem by exploiting the global similarity between samples and the hash centers, and we present copious empirical evidence demonstrating the superiority of gate-based fusion over conventional approaches. On the MS COCO and NUS-WIDE datasets, the proposed CSMVH outperforms state-of-the-art methods by a large margin (up to 11.41% mean Average Precision (mAP) improvement).
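The two ideas in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the gate parameterization, the use of a binary cross-entropy loss against a precomputed hash center, and all names (`gated_fusion`, `central_similarity_loss`) are assumptions chosen to show the general technique of gating two views and pulling a code toward a global hash center.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(img_feat, txt_feat, W, b):
    # A gate computed from both views decides, per dimension, how much
    # each view contributes -- unlike a fixed weighted sum or concatenation.
    gate = sigmoid(np.concatenate([img_feat, txt_feat]) @ W + b)
    return gate * img_feat + (1.0 - gate) * txt_feat

def central_similarity_loss(hash_logits, center):
    # Pull the relaxed hash code toward a global {-1,+1} hash center
    # (assumed precomputed) via binary cross-entropy on each bit.
    prob = sigmoid(hash_logits)        # probability that a bit is +1
    target = (center + 1.0) / 2.0      # map {-1,+1} -> {0,1}
    eps = 1e-12
    return -np.mean(target * np.log(prob + eps)
                    + (1.0 - target) * np.log(1.0 - prob + eps))

# Toy example with an 8-bit code and random features.
rng = np.random.default_rng(0)
d = 8
img = rng.normal(size=d)                      # image-view feature
txt = rng.normal(size=d)                      # text-view feature
W = rng.normal(scale=0.1, size=(2 * d, d))    # hypothetical gate weights
b = np.zeros(d)

fused = gated_fusion(img, txt, W, b)
center = np.sign(rng.normal(size=d))          # hypothetical hash center
loss = central_similarity_loss(fused, center)
```

Because the hash center is shared by all samples of a semantic class, minimizing this loss uses global similarity, rather than only pairwise (local) similarity between training samples.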

Similar Work