
Understanding BERT Rankers Under Distillation

Luyu Gao, Zhuyun Dai, Jamie Callan. Proceedings of the 2020 ACM SIGIR International Conference on Theory of Information Retrieval (ICTIR 2020) – 23 citations

Tags: Efficiency, Evaluation, SIGIR

Deep language models such as BERT, pre-trained on large corpora, have given a huge performance boost to state-of-the-art information retrieval ranking systems. Knowledge embedded in such models allows them to pick up complex matching signals between passages and queries. However, the high computation cost during inference limits their deployment in real-world search scenarios. In this paper, we study whether and how the knowledge for search within BERT can be transferred to a smaller ranker through distillation. Our experiments demonstrate that it is crucial to use a proper distillation procedure, which yields up to a nine-fold speedup while preserving state-of-the-art performance.
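The distillation the abstract refers to trains a smaller student ranker to reproduce the relevance scores of a full BERT cross-encoder teacher. Below is a minimal sketch of such score-based distillation, assuming an MSE loss on teacher logits, a bert-base teacher, and a DistilBERT student; the model names, loss choice, and hyperparameters are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of distilling a BERT cross-encoder ranker into a smaller student.
# Assumptions (not from the paper): bert-base-uncased teacher, distilbert-base-uncased
# student, MSE loss on relevance logits over (query, passage) pairs.
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification

teacher_name = "bert-base-uncased"        # assumed teacher backbone
student_name = "distilbert-base-uncased"  # assumed smaller student (same WordPiece vocab)

tokenizer = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForSequenceClassification.from_pretrained(teacher_name, num_labels=1)
student = AutoModelForSequenceClassification.from_pretrained(student_name, num_labels=1)

teacher.eval()  # teacher is frozen; only the student is updated
optimizer = torch.optim.AdamW(student.parameters(), lr=3e-5)
mse = nn.MSELoss()

def distill_step(queries, passages):
    """One distillation step on a batch of (query, passage) pairs."""
    batch = tokenizer(queries, passages, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    with torch.no_grad():
        teacher_scores = teacher(**batch).logits.squeeze(-1)  # soft targets
    # DistilBERT has no token_type_ids input, so drop them for the student.
    student_inputs = {k: v for k, v in batch.items() if k != "token_type_ids"}
    student_scores = student(**student_inputs).logits.squeeze(-1)
    loss = mse(student_scores, teacher_scores)  # match the teacher's relevance scores
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with placeholder data
queries = ["what is knowledge distillation", "bert ranking speed"]
passages = ["Distillation transfers knowledge from a large model to a small one.",
            "Smaller rankers run several times faster at inference."]
print(distill_step(queries, passages))
```

At inference time only the student is kept, which is where the reported speedup comes from: the smaller model scores each query-passage pair with far fewer transformer layers while aiming to preserve the teacher's ranking quality.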

Similar Work