
Unsupervised Multilingual Dense Retrieval Via Generative Pseudo Labeling

Chao-Wei Huang, Chen-An Li, Tsu-Yuan Hsu, Chen-Yu Hsu, Yun-Nung Chen. arXiv 2024 – 1 citation


Dense retrieval methods have demonstrated promising performance in multilingual information retrieval, where queries and documents can be in different languages. However, dense retrievers typically require a substantial amount of paired data, which poses even greater challenges in multilingual scenarios. This paper introduces UMR, an Unsupervised Multilingual dense Retriever trained without any paired data. Our approach leverages the sequence likelihood estimation capabilities of multilingual language models to acquire pseudo labels for training dense retrievers. We propose a two-stage framework that iteratively improves the performance of multilingual dense retrievers. Experimental results on two benchmark datasets show that UMR outperforms supervised baselines, showcasing the potential of training multilingual retrievers without paired data and thereby enhancing their practicality. Our source code, data, and models are publicly available at https://github.com/MiuLab/UMR.
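The core idea of pseudo labeling via sequence likelihood can be sketched as follows. This is a minimal illustration, not UMR's actual implementation: the function names and the query-likelihood scoring direction (`log P(query | passage)` from a multilingual LM) are assumptions; a toy scorer stands in for a real language model.

```python
import math
from typing import Callable, List, Tuple

def pseudo_label(
    query: str,
    passages: List[str],
    loglik: Callable[[str, str], float],
) -> Tuple[str, str]:
    """Rank candidate passages for a query by an LM's sequence
    log-likelihood score and return a (pseudo positive, pseudo negative)
    pair for training a dense retriever.

    `loglik(query, passage)` is assumed to approximate
    log P(query | passage); the signature is illustrative only.
    """
    scored = sorted(passages, key=lambda p: loglik(query, p), reverse=True)
    # Highest-scoring passage becomes the pseudo positive label,
    # lowest-scoring one a pseudo negative.
    return scored[0], scored[-1]

# Hypothetical stand-in for an LM score: log of token overlap.
def toy_loglik(query: str, passage: str) -> float:
    q, p = set(query.lower().split()), set(passage.lower().split())
    return math.log(1 + len(q & p))

pos, neg = pseudo_label(
    "what is dense retrieval",
    ["dense retrieval encodes queries and passages", "recipes for soup"],
    toy_loglik,
)
```

In an iterative two-stage setup like the one the abstract describes, pairs mined this way would be fed back as training signal for the retriever, whose improved candidates are then re-scored in the next round.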
