Cross-modal Retrieval Augmentation For Multi-modal Classification

Shir Gur, Natalia Neverova, Chris Stauffer, Ser-Nam Lim, Douwe Kiela, Austin Reiter. Findings of the Association for Computational Linguistics: EMNLP 2021 – 26 citations

[Paper]
EMNLP Evaluation Multimodal Retrieval

Recent advances in using retrieval components over external knowledge sources have shown impressive results for a variety of downstream tasks in natural language processing. Here, we explore the use of unstructured external knowledge sources of images and their corresponding captions for improving visual question answering (VQA). First, we train a novel alignment model for embedding images and captions in the same space, which achieves substantial performance improvements on image-caption retrieval compared to similar methods. Second, we show that retrieval-augmented multi-modal transformers using the trained alignment model improve results on VQA over strong baselines. We further conduct extensive experiments to establish the promise of this approach, and examine novel inference-time applications such as hot-swapping indices.
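
As a rough illustration of the retrieval-augmentation idea described in the abstract, the sketch below retrieves nearest-neighbor captions for an image embedding from a dense index in a shared image-caption space, appends them to the VQA input, and shows how such an index could be hot-swapped at inference time. This is a minimal sketch, not the paper's implementation: the names (`CaptionIndex`, `augment_vqa_input`), the `[RET]`/`[SEP]` markers, and the toy NumPy index are all hypothetical stand-ins for the trained alignment model and a real nearest-neighbor index.

```python
# Minimal retrieval-augmentation sketch over a shared image-caption embedding
# space. All class/function names are illustrative, not from the paper's code.
import numpy as np


class CaptionIndex:
    """Dense index of caption embeddings that can be hot-swapped at inference."""

    def __init__(self, embeddings: np.ndarray, captions: list[str]):
        # Normalize rows so that an inner product equals cosine similarity.
        self.embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        self.captions = captions

    def search(self, query: np.ndarray, k: int = 5) -> list[str]:
        # Return the k captions whose embeddings are closest to the query.
        q = query / np.linalg.norm(query)
        scores = self.embeddings @ q
        top = np.argsort(-scores)[:k]
        return [self.captions[i] for i in top]


def augment_vqa_input(question: str, image_embedding: np.ndarray,
                      index: CaptionIndex, k: int = 5) -> str:
    """Append retrieved captions to the question before the multi-modal transformer."""
    retrieved = index.search(image_embedding, k)
    return question + " [RET] " + " [SEP] ".join(retrieved)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 512
    # Toy caption store standing in for an external knowledge source; in practice
    # these vectors would come from the trained image-caption alignment model.
    base_index = CaptionIndex(rng.normal(size=(1000, dim)),
                              [f"caption {i}" for i in range(1000)])
    img_emb = rng.normal(size=dim)  # image embedded into the shared space
    print(augment_vqa_input("What is on the table?", img_emb, base_index, k=3))

    # Hot-swapping: replace the index with a different knowledge source at
    # inference time without retraining the model.
    swapped = CaptionIndex(rng.normal(size=(500, dim)),
                           [f"news caption {i}" for i in range(500)])
    print(augment_vqa_input("What is on the table?", img_emb, swapped, k=3))
```

In practice the caption store would be embedded by the trained alignment model and served from an approximate nearest-neighbor index rather than the brute-force NumPy search used here, which is only meant to keep the sketch self-contained.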

Similar Work