
Refining BERT Embeddings For Document Hashing Via Mutual Information Maximization

Ou Zijing, Su Qinliang, Yu Jianxing, Zhao Ruihui, Zheng Yefeng, Liu Bang. arXiv 2021


Existing unsupervised document hashing methods are mostly built on generative models. Due to the difficulty of capturing long dependency structures, these methods rarely model the raw documents directly, but instead model features extracted from them (e.g., bag-of-words (BOW), TFIDF). In this paper, we propose to learn hash codes from BERT embeddings, motivated by their tremendous success on downstream tasks. As a first attempt, we modify existing generative hashing models to accommodate the BERT embeddings. However, little improvement is observed over the codes learned from the old BOW or TFIDF features. We attribute this to the reconstruction requirement of generative hashing, which forces the irrelevant information that is abundant in BERT embeddings to be compressed into the codes as well. To remedy this issue, a new unsupervised hashing paradigm is further proposed based on the mutual information (MI) maximization principle. Specifically, the method first constructs appropriate global and local codes from the documents and then seeks to maximize their mutual information. Experimental results on three benchmark datasets demonstrate that the proposed method is able to generate hash codes that outperform existing ones learned from BOW features by a substantial margin.
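To make the global/local MI-maximization idea concrete, below is a minimal PyTorch sketch. It is not the authors' code: it assumes a global code derived from BERT's pooled embedding, local codes derived from token embeddings, and an InfoNCE-style lower bound on their mutual information (the paper may use a different estimator). All class, function, and parameter names (`MIHasher`, `infonce_loss`, `code_dim`, `temperature`) are hypothetical.

```python
# Sketch (assumed, not the paper's implementation): maximize MI between a
# document-level "global" code and token-level "local" codes via InfoNCE.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIHasher(nn.Module):
    def __init__(self, bert_dim=768, code_dim=64):
        super().__init__()
        # Global code from the pooled/[CLS] BERT embedding.
        self.global_head = nn.Linear(bert_dim, code_dim)
        # Local codes from per-token BERT embeddings.
        self.local_head = nn.Linear(bert_dim, code_dim)

    def forward(self, pooled, tokens):
        # pooled: (B, bert_dim); tokens: (B, T, bert_dim)
        g = torch.tanh(self.global_head(pooled))   # (B, D) relaxed binary code
        l = torch.tanh(self.local_head(tokens))    # (B, T, D)
        return g, l

def infonce_loss(g, l, temperature=0.1):
    """InfoNCE bound: tokens of the same document are positives for its
    global code; tokens of other documents in the batch are negatives."""
    B, T, D = l.shape
    g = F.normalize(g, dim=-1)                     # (B, D)
    l = F.normalize(l.reshape(B * T, D), dim=-1)   # (B*T, D)
    logits = g @ l.t() / temperature               # (B, B*T) similarity scores
    log_prob = F.log_softmax(logits, dim=1)
    # Document id of each token, used to mark the positive pairs.
    labels = torch.arange(B).repeat_interleave(T)  # (B*T,)
    pos_mask = labels.unsqueeze(0) == torch.arange(B).unsqueeze(1)  # (B, B*T)
    # Average the loss over each document's own T positive tokens.
    loss = -(log_prob * pos_mask).sum(dim=1) / T
    return loss.mean()
```

At inference time one would binarize the global code, e.g. with `torch.sign(g)`, to obtain the final hash code; because the objective never reconstructs the embedding, the code is free to discard the irrelevant information that the reconstruction-based variants were forced to retain.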

Similar Work