Adaptive Confidence Multi-view Hashing For Multimedia Retrieval

Zhu Jian, Cui Yu, Huang Zhangmin, Li Xingyu, Liu Lei, Zeng Lingfang, Dai Li-Rong. arXiv 2023

[Paper] [Code]    
Tags: arXiv · Cross-Modal · Has Code · Independent

Multi-view hashing converts heterogeneous data from multiple views into binary hash codes and is one of the critical technologies in multimedia retrieval. However, current methods mainly explore the complementarity among multiple views and lack confidence learning and fusion. Moreover, in practical applications, single-view data contain redundant noise. To perform confidence learning and eliminate this noise, we propose a novel Adaptive Confidence Multi-View Hashing (ACMVH) method. First, a confidence network is developed to extract useful information from each single-view feature and suppress noise. Furthermore, an adaptive confidence multi-view network is employed to measure the confidence of each view and then fuse the multi-view features through a weighted summation. Lastly, a dilation network is designed to further enhance the representation of the fused features. To the best of our knowledge, we are the first to apply confidence learning in the field of multimedia retrieval. Extensive experiments on two public datasets show that the proposed ACMVH outperforms state-of-the-art methods (a maximum improvement of 3.24%). The source code is available at https://github.com/HackerHyper/ACMVH.
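The core of the method is a confidence-weighted fusion of per-view features. Below is a minimal PyTorch sketch of that idea: a per-view gate filters each feature, a scalar confidence score weights the summation across views, and an expand-then-contract block stands in for the dilation network. All module choices, names, and dimensions here are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class ConfidenceWeightedFusion(nn.Module):
    """Sketch of confidence-weighted multi-view fusion (assumed design).

    Each view's feature is refined by a sigmoid gate (keeping useful
    information, suppressing noise), then a scalar confidence score per
    view weights a summation across views, and the fused feature is
    enhanced by an expand-then-contract block.
    """

    def __init__(self, dim: int, num_views: int):
        super().__init__()
        # Per-view gates: elementwise filter over each view's feature.
        self.gates = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
            for _ in range(num_views)
        )
        # Scalar confidence head per view for the weighted summation.
        self.scores = nn.ModuleList(
            nn.Linear(dim, 1) for _ in range(num_views)
        )
        # Dilation-style enhancement of the fused feature (an assumption):
        # expand the representation, then project back down.
        self.enhance = nn.Sequential(
            nn.Linear(dim, 2 * dim), nn.ReLU(), nn.Linear(2 * dim, dim)
        )

    def forward(self, views: list[torch.Tensor]) -> torch.Tensor:
        # Gate each view's feature: (batch, dim) per view.
        gated = [g(v) * v for g, v in zip(self.gates, views)]
        # Per-view confidence logits -> normalized fusion weights.
        logits = torch.cat([s(h) for s, h in zip(self.scores, gated)], dim=1)
        weights = torch.softmax(logits, dim=1)       # (batch, num_views)
        stacked = torch.stack(gated, dim=1)          # (batch, num_views, dim)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)
        return self.enhance(fused)                   # (batch, dim)

# Usage: two views (e.g. image and text features) of dimension 512.
fusion = ConfidenceWeightedFusion(dim=512, num_views=2)
img, txt = torch.randn(8, 512), torch.randn(8, 512)
fused = fusion([img, txt])  # (8, 512), ready for a hashing head
```

The softmax keeps the per-view fusion weights normalized, so views the network judges unreliable contribute proportionally less to the fused representation that feeds the hashing head.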

Similar Work