
StacMR: Scene-Text Aware Cross-Modal Retrieval

Andrés Mafla, Rafael Sampaio de Rezende, Lluís Gómez, Diane Larlus, Dimosthenis Karatzas. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV) – 2 citations

[Code] [Paper]
Tags: Datasets, Multimodal Retrieval

Recent models for cross-modal retrieval have benefited from an increasingly rich understanding of visual scenes, afforded by scene graphs and object interactions, to name a few. This has resulted in improved matching between the visual representation of an image and the textual representation of its caption. Yet, current visual representations overlook a key aspect: the text appearing in images, which may contain crucial information for retrieval. In this paper, we first propose a new dataset that allows exploration of cross-modal retrieval where images contain scene-text instances. Then, armed with this dataset, we describe several approaches which leverage scene text, including a better scene-text aware cross-modal retrieval method that uses specialized representations for text from the captions and text from the visual scene, and reconciles them in a common embedding space. Extensive experiments confirm that cross-modal retrieval approaches benefit from scene text and highlight interesting research questions worth exploring further. Dataset and code are available at http://europe.naverlabs.com/stacmr
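
The core idea of the abstract — encoding scene text and caption text with specialized representations and reconciling them in a common embedding space — can be illustrated with a minimal PyTorch sketch. All module names, dimensions, and the fusion-by-summation choice below are assumptions made for illustration, not the paper's exact architecture.

```python
# Minimal sketch of a scene-text aware joint embedding.
# Illustrative only: dimensions and the fusion scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneTextAwareEncoder(nn.Module):
    """Fuses visual features with scene-text features into one embedding space
    shared with a caption encoder."""
    def __init__(self, visual_dim=2048, ocr_dim=300, caption_dim=300, embed_dim=1024):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, embed_dim)  # image region features
        self.ocr_proj = nn.Linear(ocr_dim, embed_dim)        # scene-text (OCR) word embeddings
        self.caption_gru = nn.GRU(caption_dim, embed_dim, batch_first=True)

    def encode_image(self, regions, ocr_tokens):
        # Mean-pool visual regions and scene-text tokens, then combine by summation
        v = self.visual_proj(regions).mean(dim=1)
        t = self.ocr_proj(ocr_tokens).mean(dim=1)
        return F.normalize(v + t, dim=-1)

    def encode_caption(self, caption_embs):
        # Last hidden state of the GRU summarizes the caption
        _, h = self.caption_gru(caption_embs)
        return F.normalize(h.squeeze(0), dim=-1)

def triplet_ranking_loss(img, cap, margin=0.2):
    """Hinge-based triplet loss with in-batch hard negatives."""
    scores = img @ cap.t()                      # cosine similarities (inputs are normalized)
    pos = scores.diag().unsqueeze(1)            # matching image-caption pairs
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    cost_cap = (margin + scores - pos).clamp(min=0).masked_fill(mask, 0)
    cost_img = (margin + scores - pos.t()).clamp(min=0).masked_fill(mask, 0)
    return cost_cap.max(1)[0].mean() + cost_img.max(0)[0].mean()
```

The ranking loss here follows the hard-negative triplet objective common in cross-modal retrieval (e.g., VSE++); the paper's actual training objective and fusion scheme may differ.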

Similar Work