
NeighborRetr: Balancing Hub Centrality in Cross-Modal Retrieval

Zengrong Lin, Zheng Wang, Tianwen Qian, Pan Mu, Sixian Chan, Cong Bai. 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025.

[Code] [Paper]
CVPR, Evaluation, Multimodal Retrieval

Cross-modal retrieval aims to bridge the semantic gap between different modalities, such as visual and textual data, enabling accurate retrieval across them. Despite significant advancements with models like CLIP that align cross-modal representations, a persistent challenge remains: the hubness problem, where a small subset of samples (hubs) dominates as nearest neighbors, leading to biased representations and degraded retrieval accuracy. Existing methods often mitigate hubness through post-hoc normalization techniques that rely on prior data distributions, which may not be practical in real-world scenarios. In this paper, we mitigate hubness directly during training and introduce NeighborRetr, a novel method that balances the learning of hubs and adaptively adjusts the relations among various kinds of neighbors. Our approach not only mitigates the hubness problem but also enhances retrieval performance, achieving state-of-the-art results on multiple cross-modal retrieval benchmarks. Furthermore, NeighborRetr demonstrates robust generalization to new domains with substantial distribution shifts, highlighting its effectiveness in real-world applications. We make our code publicly available at https://github.com/zzezze/NeighborRetr.
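The hubness effect described in the abstract is usually diagnosed with the k-occurrence statistic: for each gallery item, count how many queries retrieve it among their top-k neighbors. Hubs have counts far above the mean, while anti-hubs are never retrieved. The sketch below is illustrative only (it is not NeighborRetr's training procedure, and the toy random embeddings are an assumption); it simply shows how hub centrality can be measured from a query-gallery similarity matrix, and that even random high-dimensional embeddings yield a skewed k-occurrence distribution.

```python
import numpy as np

def k_occurrence(sim, k=10):
    """k-occurrence N_k: how often each gallery item appears in the
    top-k neighbor lists of the queries.

    sim: (num_queries, num_gallery) similarity matrix, higher = closer.
    Returns an int array of length num_gallery; items with N_k far above
    the mean are hubs, items with N_k == 0 are anti-hubs.
    """
    # Indices of the k most similar gallery items for every query.
    topk = np.argsort(-sim, axis=1)[:, :k]
    # Count how many times each gallery index shows up across all queries.
    return np.bincount(topk.ravel(), minlength=sim.shape[1])

# Toy example with random unit-norm embeddings (hypothetical data):
rng = np.random.default_rng(0)
queries = rng.normal(size=(1000, 512))
gallery = rng.normal(size=(1000, 512))
queries /= np.linalg.norm(queries, axis=1, keepdims=True)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

nk = k_occurrence(queries @ gallery.T, k=10)
print("mean N_k:", nk.mean())             # equals k by construction
print("max  N_k:", nk.max())              # hubs: well above k
print("anti-hubs:", int((nk == 0).sum()))  # items never retrieved
```

Post-hoc normalization methods rescale the similarity matrix at test time to flatten this distribution, which requires an estimate of the gallery (prior) distribution; NeighborRetr instead addresses the imbalance during training.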

Similar Work