
Jina-embeddings-v4: Universal Embeddings For Multimodal Multilingual Retrieval

Michael Günther, Saba Sturua, Mohammad Kalim Akram, Isabelle Mohr, Andrei Ungureanu, Bo Wang, Sedigheh Eslami, Scott Martens, Maximilian Werk, Nan Wang, Han Xiao. Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025), 2025.

Tags: Evaluation, Image Retrieval, Multimodal Retrieval, Text Retrieval

We introduce jina-embeddings-v4, a 3.8 billion parameter multimodal embedding model that unifies text and image representations through a novel architecture supporting both single-vector and multi-vector embeddings in the late interaction style. The model incorporates task-specific Low-Rank Adaptation (LoRA) adapters to optimize performance across diverse retrieval scenarios, including query-document retrieval, semantic text similarity, and code search. Comprehensive evaluations demonstrate that jina-embeddings-v4 achieves state-of-the-art performance on both single-modal and cross-modal retrieval tasks, with particular strength in processing visually rich content such as tables, charts, diagrams, and mixed-media formats. To facilitate evaluation of this capability, we also introduce Jina-VDR, a novel benchmark specifically designed for visually rich image retrieval.
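
The abstract's distinction between single-vector and multi-vector (late-interaction) outputs can be made concrete with a small sketch. The snippet below is purely illustrative and does not use the model's actual API; the embedding dimensions and token counts are assumptions chosen for readability. It contrasts one cosine similarity between pooled query and document vectors with a ColBERT-style MaxSim score computed over per-token vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    # Normalize vectors so dot products equal cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Assumed shapes for illustration only: a 2048-dim pooled vector per
# query/document, and 128-dim per-token vectors.
query_vec = l2_normalize(rng.normal(size=2048))
doc_vec = l2_normalize(rng.normal(size=2048))
query_tokens = l2_normalize(rng.normal(size=(12, 128)))   # 12 query tokens
doc_tokens = l2_normalize(rng.normal(size=(200, 128)))    # 200 document tokens

# Single-vector score: one cosine similarity between pooled embeddings.
single_vector_score = float(query_vec @ doc_vec)

# Multi-vector (late-interaction) score: for each query token, take the
# best-matching document token, then sum over query tokens (MaxSim).
sim_matrix = query_tokens @ doc_tokens.T                  # shape (12, 200)
multi_vector_score = float(sim_matrix.max(axis=1).sum())

print(single_vector_score, multi_vector_score)
```

In practice the single-vector output supports cheap dense retrieval at scale, while the multi-vector output trades storage and compute for finer-grained token-level matching.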
