Dwell In The Beginning: How Language Models Embed Long Documents For Dense Retrieval

João Coelho, Bruno Martins, João Magalhães, Jamie Callan, Chenyan Xiong. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2024 – 1 citation

Text Retrieval

This study investigates the existence of positional biases in Transformer-based models for text representation learning, particularly in the context of web document retrieval. We build on previous research that demonstrated loss of information in the middle of input sequences for causal language models, extending it to the domain of representation learning. We examine positional biases at various stages of training for an encoder-decoder model, including language model pre-training, contrastive pre-training, and contrastive fine-tuning. Experiments with the MS-MARCO document collection reveal that, after contrastive pre-training, the model already generates embeddings that better capture the early contents of the input, with fine-tuning further aggravating this effect.
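To make the kind of positional bias studied here concrete, the sketch below shows one simple way to probe it (this is not the paper's code): place the same query-relevant passage at the beginning, middle, or end of a long filler document, embed each variant with a dense retriever, and compare cosine similarity to the query embedding. The model name, pooling choice, and 512-token limit are illustrative assumptions; the paper works with MS-MARCO documents and an encoder-decoder retriever.

```python
# Minimal positional-bias probe (illustrative sketch, not the paper's method).
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"  # assumption: any dense encoder works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool token embeddings into a single dense vector."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state      # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1).float()  # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # (1, dim)

query = "effects of caffeine on sleep quality"
relevant = "Caffeine intake late in the day measurably reduces sleep quality."
filler = "This paragraph contains unrelated background text. " * 40

# Same relevant passage, three different positions in the document.
variants = {
    "beginning": relevant + " " + filler,
    "middle": filler[: len(filler) // 2] + relevant + " " + filler[len(filler) // 2 :],
    "end": filler + relevant,
}

q_emb = embed(query)
for position, doc in variants.items():
    sim = torch.nn.functional.cosine_similarity(q_emb, embed(doc)).item()
    print(f"relevant passage at {position}: cosine similarity = {sim:.4f}")
```

If the encoder dwells in the beginning, the "beginning" variant tends to score noticeably higher than the "middle" and "end" variants, even though all three documents contain exactly the same relevant sentence.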
