
Scene Graph Based Fusion Network for Image-Text Retrieval

Guoliang Wang, Yanlei Shang, Yong Chen. 2023 IEEE International Conference on Multimedia and Expo (ICME) – 3 citations

[Paper]
Tags: Datasets, Graph Based ANN, Text Retrieval

A critical challenge in image-text retrieval is learning accurate correspondences between images and texts. Most existing methods focus on coarse-grained correspondences based on co-occurrences of semantic objects and fail to distinguish fine-grained local correspondences. In this paper, we propose a novel Scene Graph based Fusion Network (dubbed SGFN), which enhances image/text features through intra- and cross-modal fusion for image-text retrieval. Specifically, we design an intra-modal hierarchical attention fusion that incorporates semantic contexts, such as objects, attributes, and relationships, into image/text feature vectors via scene graphs, and a cross-modal attention fusion that combines contextual semantics with local fusion via contextual vectors. Extensive experiments on the public Flickr30K and MSCOCO datasets show that SGFN outperforms a number of state-of-the-art image-text retrieval methods.
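The abstract does not specify SGFN's architecture, so the sketch below is only a minimal PyTorch illustration of the kind of gated attention-fusion block the description suggests: context features (such as scene-graph node embeddings) are attended into query features and merged with a learned gate. The class name `AttentionFusion`, the gated residual merge, and all tensor shapes are assumptions made for illustration, not the paper's actual layers.

```python
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """One plausible attention-fusion block (illustrative sketch, not
    SGFN's exact design): enrich query features with context features
    via scaled dot-product attention, then merge with a gated residual."""

    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, query: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # query:   (B, Nq, D), e.g. image-region or word features
        # context: (B, Nc, D), e.g. scene-graph node embeddings
        q = self.q_proj(query)
        k = self.k_proj(context)
        v = self.v_proj(context)
        # Scaled dot-product attention of queries over context nodes.
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        ctx = attn @ v  # attended context for each query position
        # Gate decides, per dimension, how much context to mix in.
        g = torch.sigmoid(self.gate(torch.cat([query, ctx], dim=-1)))
        return g * query + (1.0 - g) * ctx  # gated residual fusion


# Hypothetical wiring, assuming detector regions, scene-graph embeddings,
# and text-encoder tokens as inputs: intra-modal fusion pulls scene-graph
# context into region features; cross-modal fusion then attends words
# over the fused image features.
B, D = 8, 512
regions = torch.randn(B, 36, D)    # image region features
sg_nodes = torch.randn(B, 20, D)   # scene-graph object/attribute/relation embeddings
words = torch.randn(B, 15, D)      # token features from a text encoder

intra_modal = AttentionFusion(D)
cross_modal = AttentionFusion(D)
image_feats = intra_modal(regions, sg_nodes)   # intra-modal fusion
text_feats = cross_modal(words, image_feats)   # cross-modal fusion
print(text_feats.shape)  # torch.Size([8, 15, 512])
```

The gated residual is one common way to combine attended context with the original features while letting the model fall back to the unfused signal; the paper may well use a different merge.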

Similar Work