Unified Multimodal And Multilingual Retrieval Via Multi-task Learning With NLU Integration

Xinyuan Zhang, Lina Zhang, Lisung Chen, Guangyao Liu, Shuai Nie, Jiaming Xu, Runyu Shi, Ying Huang, Guoquan Zhang. arXiv 2026 – 0 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Efficiency Image Retrieval Multimodal Retrieval Text Retrieval Tools & Libraries

Multimodal retrieval systems typically employ Vision Language Models (VLMs) that encode images and text independently into vectors within a shared embedding space. Despite incorporating text encoders, VLMs consistently underperform specialized text models on text-only retrieval tasks. Moreover, introducing additional text encoders increases storage and inference overhead and exacerbates retrieval inefficiencies, especially in multilingual settings. To address these limitations, we propose a multi-task learning framework that unifies the feature representation across images, long and short texts, and intent-rich queries. To our knowledge, this is the first work to jointly optimize multilingual image retrieval, text retrieval, and natural language understanding (NLU) tasks within a single framework. Our approach integrates image and text retrieval with a shared text encoder that is enhanced by NLU features to improve intent understanding and retrieval accuracy.
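The abstract does not specify the architecture or loss formulation, but the general setup it describes (a shared text encoder serving image-text retrieval, text-text retrieval, and an NLU objective) can be sketched as a weighted multi-task loss. The snippet below is a minimal illustrative sketch, not the authors' implementation: all module names, dimensions, loss weights, and the choice of a CLIP-style contrastive loss are assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation): a shared text
# encoder feeding three objectives -- image-text contrastive retrieval,
# text-text contrastive retrieval, and NLU intent classification.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnifiedRetrievalModel(nn.Module):
    def __init__(self, img_dim=768, txt_dim=768, embed_dim=512, num_intents=10):
        super().__init__()
        # Stand-ins for a real vision backbone and multilingual text encoder.
        self.image_encoder = nn.Linear(img_dim, embed_dim)
        self.text_encoder = nn.Linear(txt_dim, embed_dim)      # shared across all text tasks
        self.intent_head = nn.Linear(embed_dim, num_intents)   # NLU branch on text features
        self.logit_scale = nn.Parameter(torch.tensor(2.659))   # ~log(1/0.07), CLIP-style temperature

    def encode_image(self, img_feats):
        return F.normalize(self.image_encoder(img_feats), dim=-1)

    def encode_text(self, txt_feats):
        return F.normalize(self.text_encoder(txt_feats), dim=-1)


def contrastive_loss(a, b, logit_scale):
    # Symmetric InfoNCE over a batch of paired embeddings.
    logits = logit_scale.exp() * a @ b.t()
    labels = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


def multitask_loss(model, img_feats, caption_feats, query_feats, doc_feats,
                   intent_labels, w_img=1.0, w_txt=1.0, w_nlu=0.5):
    # The loss weights w_img, w_txt, w_nlu are illustrative hyperparameters.
    img_emb = model.encode_image(img_feats)
    cap_emb = model.encode_text(caption_feats)
    qry_emb = model.encode_text(query_feats)
    doc_emb = model.encode_text(doc_feats)

    loss_img = contrastive_loss(img_emb, cap_emb, model.logit_scale)       # image-text retrieval
    loss_txt = contrastive_loss(qry_emb, doc_emb, model.logit_scale)       # text-text retrieval
    loss_nlu = F.cross_entropy(model.intent_head(qry_emb), intent_labels)  # intent classification
    return w_img * loss_img + w_txt * loss_txt + w_nlu * loss_nlu
```

Because the text encoder is shared by the image-text, text-text, and NLU branches, a single set of text embeddings can serve all retrieval modes at inference time, which is the storage and inference saving the abstract points to.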

Similar Work