
Approximate Search With Quantized Sparse Representations

Himalaya Jain, Patrick Pérez, Rémi Gribonval, Joaquin Zepeda, Hervé Jégou. Lecture Notes in Computer Science, 2016 – 17 citations

Tags: Evaluation, Memory Efficiency, Quantization

This paper tackles the task of storing a large collection of vectors, such as visual descriptors, and of searching in it. To this end, we propose to approximate database vectors by constrained sparse coding, where possible atom weights are restricted to belong to a finite subset. This formulation encompasses, as particular cases, previous state-of-the-art methods such as product or residual quantization. As opposed to traditional sparse coding methods, quantized sparse coding includes memory usage as a design constraint, thereby allowing us to index a large collection such as the BIGANN billion-sized benchmark. Our experiments, carried out on standard benchmarks, show that our formulation leads to competitive solutions when considering different trade-offs between learning/coding time, index size and search quality.
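To make the idea of "constrained sparse coding with a finite set of atom weights" concrete, here is a minimal illustrative sketch (not the paper's exact algorithm). It greedily picks dictionary atoms, matching-pursuit style, but snaps each atom weight to a small finite grid, so a vector's code is just a short list of (atom index, weight index) pairs — a fixed, small memory budget per vector. All function names and parameter choices below are hypothetical:

```python
import numpy as np

def quantized_sparse_encode(x, D, weight_set, n_atoms=4):
    """Greedy encoding sketch: approximate x by a sum of n_atoms dictionary
    atoms, each scaled by a weight drawn from the finite set weight_set.
    x: (d,) vector; D: (K, d) unit-norm atoms; weight_set: (W,) allowed weights."""
    residual = x.copy()
    code = []  # list of (atom_index, weight_index) pairs
    for _ in range(n_atoms):
        # pick the atom most correlated with the current residual
        scores = D @ residual
        k = int(np.argmax(np.abs(scores)))
        # the unconstrained optimal weight is <residual, d_k>;
        # quantize it to the nearest allowed value
        w = int(np.argmin(np.abs(weight_set - scores[k])))
        code.append((k, w))
        residual = residual - weight_set[w] * D[k]
    return code, residual

def decode(code, D, weight_set):
    """Reconstruct the approximation from the compact code."""
    return sum(weight_set[w] * D[k] for k, w in code)

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 16))
D /= np.linalg.norm(D, axis=1, keepdims=True)     # 64 unit-norm atoms in R^16
weight_set = np.linspace(-2.0, 2.0, 16)           # 16 levels -> 4 bits per weight
x = rng.normal(size=16)
code, residual = quantized_sparse_encode(x, D, weight_set)
x_hat = decode(code, D, weight_set)
```

With these toy sizes, each database vector costs 4 atoms × (6 bits for the atom index + 4 bits for the weight index) = 40 bits, which is the kind of explicit memory constraint the abstract refers to; product and residual quantization fall out as special cases of how the dictionary and weight set are structured.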

Similar Work