AiSAQ: All-in-Storage ANNS with Product Quantization for DRAM-free Information Retrieval

Kento Tatsuno, Daisuke Miyashita, Taiga Ikeda, Kiyoshi Ishiyama, Kazunari Sumiyoshi, Jun Deguchi. arXiv 2024 – 1 citation

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Datasets · Evaluation · Graph Based ANN · Large Scale Search · Quantization

Graph-based approximate nearest neighbor search (ANNS) algorithms are effective for large-scale vector retrieval. Among such methods, DiskANN achieves a good recall-speed tradeoff by using both DRAM and storage: it adopts product quantization (PQ) to reduce memory usage, but that usage still grows in proportion to the dataset scale. In this paper, we propose All-in-Storage ANNS with Product Quantization (AiSAQ), which offloads the compressed vectors to the SSD-resident index. Our method achieves $\sim$10 MB of memory usage during query search on billion-scale datasets without critical latency degradation. AiSAQ also reduces the index load time needed to prepare for query search, which enables fast switching between multiple billion-scale indices. The method can serve as the retriever in retrieval-augmented generation (RAG) and can be scaled out across multiple-server systems for emerging datasets. Our DiskANN-based implementation is available on GitHub.
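To make the core idea concrete, here is a minimal, self-contained sketch, not the authors' implementation: AiSAQ's actual on-SSD layout and DiskANN's beam search differ, and all names, parameters, and the toy random graph below are illustrative assumptions. It shows a graph traversal in which the PQ codebooks and a per-query lookup table are the only DRAM-resident state, while the PQ codes and adjacency lists live in the storage-resident index.

```python
import heapq
import numpy as np

# PQ configuration (illustrative values, not AiSAQ's actual settings):
# M subquantizers, KS centroids each, over DIM-dimensional vectors.
M, KS = 8, 256
DIM = 32
SUB = DIM // M  # subvector length per subquantizer

rng = np.random.default_rng(0)
# The codebooks are tiny (M * KS * SUB floats) and stay in DRAM.
codebooks = rng.normal(size=(M, KS, SUB)).astype(np.float32)

def pq_encode(x):
    """Encode one vector into M uint8 codes (nearest centroid per subspace)."""
    codes = np.empty(M, dtype=np.uint8)
    for m in range(M):
        sub = x[m * SUB:(m + 1) * SUB]
        codes[m] = np.argmin(((codebooks[m] - sub) ** 2).sum(axis=1))
    return codes

def build_lut(q):
    """Per-query lookup table: squared distance from each query subvector
    to every centroid. This is the main per-query DRAM cost, O(M * KS)."""
    lut = np.empty((M, KS), dtype=np.float32)
    for m in range(M):
        lut[m] = ((codebooks[m] - q[m * SUB:(m + 1) * SUB]) ** 2).sum(axis=1)
    return lut

def adc(codes, lut):
    """Asymmetric distance: sum the table entries selected by the PQ codes."""
    return float(lut[np.arange(M), codes].sum())

# Two arrays stand in for the storage-resident index: the PQ codes of every
# vector and the adjacency lists of the search graph. In an all-in-storage
# design these would be read from SSD per visited node, not held in DRAM.
N, DEGREE = 1000, 16
data = rng.normal(size=(N, DIM)).astype(np.float32)
pq_codes = np.stack([pq_encode(v) for v in data])   # "on storage"
graph = rng.integers(0, N, size=(N, DEGREE))        # "on storage"

def greedy_search(q, entry=0, beam=8, k=5):
    """Best-first graph traversal; every distance uses only PQ codes
    fetched from the storage-resident index."""
    lut = build_lut(q)
    visited = {entry}
    cand = [(adc(pq_codes[entry], lut), entry)]
    best = sorted(cand)  # kept sorted, truncated to beam width
    while cand:
        d, node = heapq.heappop(cand)
        if len(best) >= beam and d > best[-1][0]:
            break  # no candidate can improve the current beam
        for nb in graph[node]:          # one "SSD read" per visited node
            if nb in visited:
                continue
            visited.add(nb)
            dn = adc(pq_codes[nb], lut)
            heapq.heappush(cand, (dn, nb))
            best = sorted(best + [(dn, nb)])[:beam]
    return best[:k]

print(greedy_search(rng.normal(size=DIM).astype(np.float32)))
```

The point of the sketch: the DRAM footprint is the codebooks plus the O(M·KS) lookup table, independent of the dataset size N, because every per-node distance is computed from codes fetched from the storage-resident index.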

Similar Work