
Enhancing Image-text Matching With Adaptive Feature Aggregation

Zuhui Wang, Yunting Yin, I. V. Ramakrishnan. ICASSP 2024 – IEEE International Conference on Acoustics, Speech and Signal Processing – 4 citations

Tags: Datasets, Evaluation, ICASSP, Text Retrieval

Image-text matching aims to accurately identify matched cross-modal pairs. While current methods often rely on projecting cross-modal features into a common embedding space, they frequently suffer from imbalanced feature representations across modalities, leading to unreliable retrieval results. To address these limitations, we introduce a novel Feature Enhancement Module that adaptively aggregates single-modal features for more balanced and robust image-text retrieval. Additionally, we propose a new loss function that overcomes the shortcomings of the original triplet ranking loss, thereby significantly improving retrieval performance. The proposed model has been evaluated on two public datasets and achieves competitive retrieval performance compared with several state-of-the-art models. Implementation code is publicly available.

Similar Work