
BERT-LSH: Reducing Absolute Compute For Attention

Zezheng Li, Kingston Yip. arXiv 2024

[Paper] [Code]
Tags: ArXiv, Has Code, Independent, LSH

This study introduces a novel BERT-LSH model that incorporates Locality Sensitive Hashing (LSH) to approximate the attention mechanism in the BERT architecture. We examine the computational efficiency and performance of this model compared to a standard baseline BERT model. Our findings reveal that BERT-LSH significantly reduces computational demand for the self-attention layer while unexpectedly outperforming the baseline model in pretraining and fine-tuning tasks. These results suggest that the LSH-based attention mechanism not only offers computational advantages but may also enhance the model's ability to generalize from its training data. For more information, visit our GitHub repository: https://github.com/leo4life2/algoml-final
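The repository linked above contains the authors' implementation. As a rough illustration only, the sketch below shows one common way to realize LSH-based attention: queries and keys are hashed with shared random hyperplanes, and attention scores are kept only for query/key pairs that land in the same bucket. The function names, the number of hyperplanes, and the dense-mask formulation are assumptions for illustration, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def lsh_bucket_ids(x, n_planes=8, planes=None):
    # Random-hyperplane LSH: the sign pattern of projections onto shared
    # hyperplanes is packed into an integer bucket id. Passing the same
    # `planes` for queries and keys lets similar vectors collide.
    if planes is None:
        planes = torch.randn(x.shape[-1], n_planes, device=x.device)
    signs = (x @ planes) > 0                                   # (..., n_planes)
    weights = 2 ** torch.arange(n_planes, device=x.device)
    return (signs.long() * weights).sum(-1), planes            # (...,) bucket ids

def lsh_attention(q, k, v, n_planes=8):
    # q, k, v: (batch, seq_len, dim). Scores are retained only for
    # query/key pairs whose bucket ids match; other pairs are masked out.
    q_ids, planes = lsh_bucket_ids(q, n_planes)
    k_ids, _ = lsh_bucket_ids(k, n_planes, planes=planes)
    same_bucket = q_ids.unsqueeze(-1) == k_ids.unsqueeze(-2)   # (batch, q, k)
    scores = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~same_bucket, float("-inf"))
    # Fallback: if a query shares a bucket with no key, attend uniformly
    # instead of producing NaNs from an all -inf row.
    empty_rows = ~same_bucket.any(dim=-1, keepdim=True)
    scores = scores.masked_fill(empty_rows, 0.0)
    return F.softmax(scores, dim=-1) @ v

# Example: 2 sequences of 16 tokens with 64-dimensional heads.
q = torch.randn(2, 16, 64)
out = lsh_attention(q, torch.randn(2, 16, 64), torch.randn(2, 16, 64))
print(out.shape)  # torch.Size([2, 16, 64])
```

Note that this dense-mask version still materializes the full score matrix and only illustrates the bucketing logic; the compute savings described in the abstract come from restricting the score computation itself to tokens within each bucket.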

Similar Work