
Simple Strategies For Recovering Inner Products From Coarsely Quantized Random Projections

Ping Li, Martin Slawski. Neural Information Processing Systems 2017

[Paper]    

Random projections have been increasingly adopted for a diverse set of tasks in machine learning involving dimensionality reduction. One specific line of research on this topic has investigated the use of quantization subsequent to projection with the aim of additional data compression. Motivated by applications in nearest neighbor search and linear learning, we revisit the problem of recovering inner products (respectively, cosine similarities) in such a setting. We show that even under coarse scalar quantization with 3 to 5 bits per projection, the loss in accuracy tends to range from "negligible" to "moderate". One implication is that in most scenarios of practical interest, there is no need for a sophisticated recovery approach like maximum likelihood estimation as considered in previous work on the subject. What we propose herein also yields considerable improvements in accuracy over the Hamming distance-based approach of Li et al. (ICML 2014), which is comparable in terms of simplicity.
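
To make the setup concrete, here is a minimal sketch of the pipeline the abstract describes: project the data to k dimensions with a Gaussian matrix, apply a coarse b-bit scalar quantizer, and estimate the cosine similarity directly from the quantized codes. The uniform quantizer with a fixed clipping range and the plain normalized inner-product estimator below are illustrative stand-ins, not the paper's exact construction (the paper analyzes more refined choices, e.g. Lloyd-Max-style quantizers and debiased estimators).

```python
import numpy as np

rng = np.random.default_rng(0)

def project_and_quantize(X, k, bits, clip=3.0):
    """Gaussian random projection followed by a b-bit uniform scalar
    quantizer. For unit-norm rows of X, each projected coordinate is
    ~ N(0, 1), so clipping at +/- 3 loses little probability mass.
    (Illustrative stand-in; the paper studies other scalar quantizers.)"""
    d = X.shape[1]
    R = rng.standard_normal((d, k))        # i.i.d. N(0, 1) projection matrix
    Z = X @ R                              # k projections per row of X
    levels = 2 ** bits                     # e.g. 4 bits -> 16 quantizer cells
    step = 2.0 * clip / levels
    cells = np.clip(np.floor((Z + clip) / step), 0, levels - 1)
    return (cells + 0.5) * step - clip     # midpoint reconstruction

# Toy check: recover a cosine similarity of ~0.9 from 4-bit codes.
d, k = 1000, 4096
x = rng.standard_normal(d); x /= np.linalg.norm(x)
u = rng.standard_normal(d); u /= np.linalg.norm(u)
y = 0.9 * x + np.sqrt(1 - 0.9 ** 2) * u
y /= np.linalg.norm(y)                     # cosine(x, y) is close to 0.9

Q = project_and_quantize(np.vstack([x, y]), k=k, bits=4)
qx, qy = Q[0], Q[1]
est = (qx @ qy) / (np.linalg.norm(qx) * np.linalg.norm(qy))
print(f"true cosine: {x @ y:.3f}  estimate from 4-bit codes: {est:.3f}")
```

The intuition behind the abstract's claim is visible here: at 3 to 5 bits the quantizer's distortion is small relative to the O(1/sqrt(k)) noise of the random projection itself, so a simple plug-in estimate of the inner product already performs close to the unquantized baseline.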
