
Cross-modal Retrieval For Knowledge-based Visual Question Answering

Paul Lerner, Olivier Ferret, Camille Guinaudeau. Lecture Notes in Computer Science, 2024 – 8 citations

[Paper]
Tags: Datasets, Multimodal Retrieval

Knowledge-based Visual Question Answering about Named Entities is a challenging task that requires retrieving information from a multimodal Knowledge Base. Named entities have diverse visual representations and are therefore difficult to recognize. We argue that cross-modal retrieval may help bridge the semantic gap between an entity and its depictions, and is above all complementary to mono-modal retrieval. We provide empirical evidence through experiments with a multimodal dual encoder, namely CLIP, on the recent ViQuAE, InfoSeek, and Encyclopedic-VQA datasets. Additionally, we study three different strategies to fine-tune such a model: mono-modal, cross-modal, or joint training. Our method, which combines mono- and cross-modal retrieval, is competitive with billion-parameter models on the three datasets, while being conceptually simpler and computationally cheaper.
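As a concrete illustration of the retrieval setup described in the abstract, the sketch below shows how mono-modal (image-to-image) and cross-modal (image-to-text) similarity scores from a CLIP dual encoder could be combined by a weighted sum. It is a minimal sketch under stated assumptions, not the authors' implementation: the Hugging Face `transformers` checkpoint, the file paths, the toy knowledge-base entries, and the fusion weight `alpha` are all illustrative.

```python
# Minimal sketch (not the authors' code): mono- and cross-modal retrieval
# with a CLIP dual encoder, fused by a weighted sum of similarity scores.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_texts(texts):
    inputs = processor(text=texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def embed_images(images):
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

# Query side: the image in the visual question.
# KB side: entity labels (text) and entity images. Paths/labels are hypothetical.
query_image = Image.open("question_image.jpg")
kb_texts = ["Eiffel Tower", "Tower Bridge"]
kb_images = [Image.open(p) for p in ["eiffel.jpg", "bridge.jpg"]]

q_img = embed_images([query_image])   # (1, d) query image embedding
t_kb = embed_texts(kb_texts)          # (n, d) KB text embeddings
i_kb = embed_images(kb_images)        # (n, d) KB image embeddings

cross_modal = q_img @ t_kb.T          # image query vs. KB text
mono_modal = q_img @ i_kb.T           # image query vs. KB images

alpha = 0.5                           # illustrative fusion weight
scores = alpha * cross_modal + (1 - alpha) * mono_modal
ranking = scores.argsort(dim=-1, descending=True)
print(ranking)
```

Late fusion of the two score lists keeps the two retrieval modes independent, which matches the abstract's framing of cross-modal retrieval as complementary to mono-modal retrieval rather than a replacement for it.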

Similar Work