
FastQuery: Communication-efficient Embedding Table Query for Private LLM Inference

Chenqi Lin, Tianshi Xu, Zebin Yang, Runsheng Wang, Ru Huang, Meng Li. DAC '24: 61st ACM/IEEE Design Automation Conference, 2024

Tags: Quantization · Robustness · Tools & Libraries

With the fast evolution of large language models (LLMs), privacy concerns over user queries arise, as queries may contain sensitive information. Private inference based on homomorphic encryption (HE) has been proposed to protect user query privacy. However, a private embedding table query must be formulated as an HE-based matrix-vector multiplication and suffers from enormous computation and communication overhead. We observe that this overhead mainly stems from neglecting 1) the one-hot nature of user queries and 2) the robustness of the embedding table to low-bit-width quantization noise. Hence, in this paper, we propose a private embedding table query optimization framework, dubbed FastQuery. FastQuery features a communication-aware embedding table quantization algorithm and a one-hot-aware dense packing algorithm to simultaneously reduce both the computation and communication costs. Compared to prior-art HE-based frameworks, e.g., Cheetah, Iron, and Bumblebee, FastQuery achieves more than 4.3×, 2.7×, and 1.3× latency reduction, respectively, and more than 75.7×, 60.2×, and 20.2× communication reduction, respectively, on both LLAMA-7B and LLAMA-30B.
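To make the two observations concrete, here is a minimal NumPy sketch (not the FastQuery protocol; the small table shape and the 4-bit per-row symmetric quantizer are illustrative assumptions). It shows that an embedding lookup is exactly a one-hot matrix-vector product, and that a low-bit quantized table reproduces the looked-up row with only small error:

```python
import numpy as np

# Toy illustration, not the FastQuery protocol: the shapes and the 4-bit
# symmetric quantizer below are assumptions for demonstration only.
# (LLAMA-7B's real table is 32000 x 4096; a small one is used here.)
vocab_size, dim = 1024, 64
rng = np.random.default_rng(0)
table = rng.standard_normal((vocab_size, dim)).astype(np.float32)

# Observation 1: an embedding lookup equals a matrix-vector product with a
# one-hot query vector -- the operation an HE protocol computes obliviously.
token_id = 42
one_hot = np.zeros(vocab_size, dtype=np.float32)
one_hot[token_id] = 1.0
assert np.allclose(one_hot @ table, table[token_id])

# Observation 2: the table tolerates low-bit-width quantization. A per-row
# symmetric 4-bit quantizer reconstructs the queried row closely, so the
# ciphertexts in a private query can carry far fewer bits.
bits = 4
qmax = 2 ** (bits - 1) - 1                        # 7 for signed 4-bit
scale = np.abs(table).max(axis=1, keepdims=True) / qmax
q_table = np.clip(np.round(table / scale), -qmax - 1, qmax)
recon = q_table * scale
row_err = np.abs(recon[token_id] - table[token_id]).max()
print(f"max abs error on the queried row: {row_err:.4f}")
```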

Similar Work