
SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval

Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei. arXiv 2022 – 6 citations

Tags: Datasets, Efficiency, Memory Efficiency, Self-Supervised, Supervised

In this paper, we propose SimLM (Similarity matching with Language Model pre-training), a simple yet effective pre-training method for dense passage retrieval. It employs a simple bottleneck architecture that learns to compress passage information into a dense vector through self-supervised pre-training. We use a replaced language modeling objective, inspired by ELECTRA, to improve sample efficiency and reduce the input-distribution mismatch between pre-training and fine-tuning. SimLM only requires access to an unlabeled corpus, and is therefore more broadly applicable when no labeled data or queries are available. We conduct experiments on several large-scale passage retrieval datasets and show substantial improvements over strong baselines under various settings. Remarkably, SimLM even outperforms multi-vector approaches such as ColBERTv2, which incur significantly higher storage costs. Our code and model checkpoints are available at https://github.com/microsoft/unilm/tree/master/simlm.
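To make the bottleneck idea concrete, below is a minimal sketch of this kind of pre-training setup, assuming PyTorch and HuggingFace Transformers. The module name `BottleneckPretrainer`, the two-layer decoder, and the data conventions (separately corrupted encoder and decoder inputs, with the original token ids as targets) are illustrative assumptions, not the authors' released implementation; the ELECTRA-style generator that produces the replaced tokens is assumed to run upstream in the data pipeline.

```python
# Sketch of bottleneck pre-training with a replaced language modeling objective.
# Assumes PyTorch + HuggingFace Transformers; names and hyperparameters are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModel


class BottleneckPretrainer(nn.Module):
    def __init__(self, model_name="bert-base-uncased", decoder_layers=2):
        super().__init__()
        # Deep encoder: compresses the (corrupted) passage into one dense vector.
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        vocab = self.encoder.config.vocab_size
        # Shallow decoder: with only a couple of layers, most information must
        # flow through the bottleneck vector rather than the decoder itself.
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=12, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=decoder_layers)
        self.lm_head = nn.Linear(hidden, vocab)

    def forward(self, enc_input_ids, enc_attention_mask,
                dec_input_ids, dec_attention_mask, original_ids):
        # The [CLS] vector acts as the representation bottleneck.
        enc_out = self.encoder(input_ids=enc_input_ids,
                               attention_mask=enc_attention_mask)
        cls_vec = enc_out.last_hidden_state[:, :1, :]  # (B, 1, H)

        # The decoder sees its own corrupted copy of the passage, with the
        # bottleneck vector prepended in place of its first token embedding.
        dec_emb = self.encoder.embeddings(input_ids=dec_input_ids)
        dec_in = torch.cat([cls_vec, dec_emb[:, 1:, :]], dim=1)
        dec_out = self.decoder(dec_in,
                               src_key_padding_mask=~dec_attention_mask.bool())

        # Replaced language modeling: predict the original token at every
        # position, so replaced positions must be recovered from the bottleneck.
        logits = self.lm_head(dec_out)
        loss = nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), original_ids.view(-1))
        return loss
```

The shallow decoder is the key design choice in this sketch: because it cannot reconstruct the passage on its own, the encoder is pushed to pack as much passage information as possible into the single [CLS] vector, which is exactly the vector later fine-tuned as the passage embedding for dense retrieval.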

Similar Work