
Embedding Compression With Isotropic Iterative Quantization

Siyu Liao, Jie Chen, Yanzhi Wang, Qinru Qiu, Bo Yuan. Proceedings of the AAAI Conference on Artificial Intelligence 2020 – 6 citations

AAAI Evaluation Image Retrieval Neural Hashing Quantization

Continuous representation of words is a standard component in deep learning-based NLP models. However, representing a large vocabulary requires significant memory, which can cause problems, particularly on resource-constrained platforms. In this paper we therefore propose an isotropic iterative quantization (IIQ) approach for compressing embedding vectors into binary ones, leveraging the iterative quantization technique well established for image retrieval while satisfying the desired isotropic property of PMI-based models. Experiments with pre-trained embeddings (i.e., GloVe and HDC) demonstrate a more than thirty-fold compression ratio with comparable and sometimes even improved performance over the original real-valued embedding vectors.
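The abstract builds on the iterative quantization (ITQ) idea of learning an orthogonal rotation and then taking signs to obtain binary codes. Below is a minimal NumPy sketch of that underlying ITQ step only; it does not include the isotropy-enforcing preprocessing that distinguishes IIQ, and the function name `itq_binarize` and its parameters are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def itq_binarize(X, n_iter=50, seed=0):
    """Sketch of ITQ-style binarization of embedding vectors.

    X: (n, d) real-valued embedding matrix, assumed already centered
       (IIQ's isotropy-oriented preprocessing is not shown here).
    Returns binary codes in {-1, +1} and the learned rotation R.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Start from a random orthogonal rotation.
    R, _ = np.linalg.qr(rng.standard_normal((d, d)))
    for _ in range(n_iter):
        # Fix R, update the binary codes by taking signs.
        B = np.sign(X @ R)
        B[B == 0] = 1
        # Fix B, update R by solving the orthogonal Procrustes problem:
        # minimize ||B - X R||_F over orthogonal R.
        U, _, Vt = np.linalg.svd(B.T @ X)
        R = (U @ Vt).T
    codes = np.sign(X @ R)
    codes[codes == 0] = 1
    return codes.astype(np.int8), R

# Example usage with random stand-in embeddings:
# X = np.random.randn(10000, 300)
# X -= X.mean(axis=0)
# codes, R = itq_binarize(X)
```

Storing each dimension as a single bit instead of a 32-bit float is what yields the roughly thirty-fold compression reported in the abstract.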

Similar Work