
Self-adaptive Multimodal Retrieval-augmented Generation

Wenjia Zhai. arXiv 2024 – 2 citations

[Code] [Paper]
Tags: Evaluation, Multimodal Retrieval

Traditional Retrieval-Augmented Generation (RAG) methods are limited by their reliance on a fixed number of retrieved documents, which often yields incomplete or noisy information that undermines task performance. Although recent adaptive approaches have alleviated these problems, their application to intricate, real-world multimodal tasks remains limited. To address these limitations, we propose a new approach called Self-adaptive Multimodal Retrieval-Augmented Generation (SAM-RAG), tailored specifically for multimodal contexts. SAM-RAG not only dynamically filters relevant documents based on the input query, including image captions when needed, but also verifies the quality of both the retrieved documents and the output. Extensive experimental results show that SAM-RAG surpasses existing state-of-the-art methods in both retrieval accuracy and response generation. Further ablation experiments and effectiveness analyses show that SAM-RAG maintains high recall quality while improving overall task performance on multimodal RAG tasks. Our code is available at https://github.com/SAM-RAG/SAM_RAG.
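The abstract describes a filter-then-verify pipeline: retrieve candidates, keep only those judged relevant to the query (rather than a fixed top-k), generate an answer, then verify the output. Below is a minimal, self-contained Python sketch of that control flow under stated assumptions: the function names (`score_relevance`, `generate_answer`, `verify_answer`), the token-overlap relevance heuristic, and the fallback behavior are illustrative placeholders, not the authors' implementation, which would use multimodal LLM judges and generators.

```python
# Minimal sketch of an adaptive filter-then-verify RAG loop.
# All helpers below are hypothetical stand-ins for LLM-based components.
from dataclasses import dataclass

@dataclass
class Document:
    text: str           # document body (or an image caption for visual sources)
    score: float = 0.0  # relevance score assigned during filtering

def score_relevance(query: str, doc: Document) -> float:
    """Placeholder relevance judge: token overlap stands in for an LLM judge."""
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.text.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def generate_answer(query: str, context: list[Document]) -> str:
    """Placeholder generator: a real system would call a multimodal LLM here."""
    return f"Answer to '{query}' grounded in {len(context)} document(s)."

def verify_answer(query: str, answer: str, context: list[Document]) -> bool:
    """Placeholder verifier: a real system would ask an LLM whether the answer
    is supported by the retrieved context."""
    return len(context) > 0

def adaptive_rag_sketch(query: str, corpus: list[Document],
                        relevance_threshold: float = 0.3) -> str:
    # 1. Adaptive filtering: keep only documents judged relevant to the query,
    #    instead of a fixed number of retrieved documents.
    relevant = []
    for doc in corpus:
        doc.score = score_relevance(query, doc)
        if doc.score >= relevance_threshold:
            relevant.append(doc)
    # 2. Generate an answer from the filtered context.
    answer = generate_answer(query, relevant)
    # 3. Verify the output; fall back to answering without retrieval if
    #    verification fails (one possible fallback, not taken from the paper).
    if not verify_answer(query, answer, relevant):
        answer = generate_answer(query, [])
    return answer

if __name__ == "__main__":
    corpus = [Document("A red panda eating bamboo in a forest."),
              Document("Stock prices rose sharply on Monday.")]
    print(adaptive_rag_sketch("What is the red panda eating?", corpus))
```

In a full multimodal setting, image documents would first be captioned and those captions fed through the same relevance filter, which is the role image captions play in the abstract's description.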

Similar Work