
Tevatron 2.0: Unified Document Retrieval Toolkit Across Scale, Language, And Modality

Xueguang Ma, Luyu Gao, Shengyao Zhuang, Jiaqi Samantha Zhan, Jamie Callan, Jimmy Lin. SIGIR '25: The 48th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2025.

Keywords: audio retrieval, evaluation, large-scale search, multimodal retrieval, text retrieval

Recent advancements in large language models (LLMs) have driven interest in billion-scale retrieval models with strong generalization across retrieval tasks and languages. Additionally, progress in large vision-language models has created new opportunities for multimodal retrieval. In response, we have updated the Tevatron toolkit, introducing a unified pipeline that enables researchers to explore retriever models at different scales, across multiple languages, and with various modalities. This demo paper highlights the toolkit’s key features, bridging academia and industry by supporting efficient training, inference, and evaluation of neural retrievers. We showcase a unified dense retriever achieving strong multilingual and multimodal effectiveness, and conduct a cross-modality zero-shot study to demonstrate its research potential. Alongside, we release OmniEmbed, to the best of our knowledge, the first embedding model that unifies text, image document, video, and audio retrieval, serving as a baseline for future research.
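The core operation shared by the dense retrievers the abstract describes is simple: encode queries and documents into a common vector space, then rank documents by the similarity of their embeddings to the query embedding. The sketch below illustrates that ranking step only; the neural encoder is abstracted away, and the toy vectors and the `rank` helper are illustrative assumptions, not part of the Tevatron API.

```python
import numpy as np

def rank(query_emb: np.ndarray, doc_embs: np.ndarray) -> list[int]:
    """Rank documents for one query by dot-product similarity.

    query_emb: shape (d,) embedding of the query.
    doc_embs:  shape (n, d) matrix of document embeddings.
    Returns document indices sorted from most to least similar.
    In a real system the embeddings would come from a trained
    (possibly multilingual or multimodal) encoder.
    """
    scores = doc_embs @ query_emb        # (n,) similarity scores
    return np.argsort(-scores).tolist()  # descending order

# Toy example: 2-dimensional embeddings, purely illustrative.
query = np.array([1.0, 0.0])
docs = np.array([
    [0.9, 0.1],   # doc 0: most similar to the query
    [0.1, 0.9],   # doc 1: least similar
    [0.5, 0.5],   # doc 2: in between
])
print(rank(query, docs))  # → [0, 2, 1]
```

With cross-modal encoders such as the OmniEmbed model released with the paper, the same dot-product ranking applies unchanged whether the documents are text, images, video, or audio, since all are mapped into one shared embedding space.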
