
DReSD: Dense Retrieval for Speculative Decoding

Milan Gritta, Huiyin Xue, Gerasimos Lampouras. Findings of the Association for Computational Linguistics: ACL 2025.

Tags: Scalability, Similarity Search, Tools & Libraries

Speculative decoding (SD) accelerates Large Language Model (LLM) generation by using an efficient draft model to propose the next few tokens, which the LLM verifies in a single forward call, reducing latency while preserving its outputs. We focus on retrieval-based SD, where the draft model retrieves the next tokens from a non-parametric datastore. Sparse retrieval (REST), which operates on the surface form of strings, is currently the dominant paradigm due to its simplicity and scalability, but its effectiveness is limited by its reliance on short contexts and exact string matching. Instead, we introduce Dense Retrieval for Speculative Decoding (DReSD), a novel framework that uses approximate nearest neighbour search over contextualised token embeddings to retrieve the most semantically relevant token sequences for SD. Extensive experiments show that DReSD achieves (on average) 87% higher acceptance rates, 65% longer accepted token sequences, and 19% faster generation speeds than sparse retrieval (REST).
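To make the retrieval-drafting loop concrete, below is a minimal sketch of dense-retrieval drafting built on FAISS. The paper does not prescribe this library or layout; the datastore structure, dimensions, and function names here are illustrative assumptions, not the authors' implementation. Each datastore entry pairs a contextualised prefix embedding with the token ids that followed that prefix in the corpus; at decode time, the current context embedding retrieves the nearest entries, whose continuations become draft tokens for the LLM to verify in a single forward pass.

```python
import numpy as np
import faiss  # nearest-neighbour search library; an assumed choice, not named by the paper

# Hypothetical datastore: each entry pairs a contextualised embedding of a
# corpus prefix with the token ids that followed that prefix.
DIM, ENTRIES, SPAN = 64, 10_000, 4
rng = np.random.default_rng(0)
keys = rng.standard_normal((ENTRIES, DIM)).astype("float32")
faiss.normalize_L2(keys)  # unit-normalise so inner product = cosine similarity
continuations = rng.integers(0, 32_000, size=(ENTRIES, SPAN))

index = faiss.IndexFlatIP(DIM)  # exact index for clarity; at scale an approximate
index.add(keys)                 # index such as faiss.IndexHNSWFlat would be used

def draft_tokens(context_embedding: np.ndarray, k: int = 3) -> np.ndarray:
    """Retrieve the k most semantically similar stored prefixes and return their
    continuations as candidate draft sequences for the target LLM to verify."""
    q = context_embedding.astype("float32").reshape(1, -1)
    faiss.normalize_L2(q)
    _, ids = index.search(q, k)
    return continuations[ids[0]]  # (k, SPAN) candidate draft token ids

# Usage: embed the current decoding context, then retrieve draft candidates.
query = rng.standard_normal(DIM)
print(draft_tokens(query))
```

In a real pipeline, the query embedding would come from the target LLM's contextualised hidden state for the current prefix, and a verification step would accept the longest draft prefix consistent with the LLM's own next-token predictions, as in standard speculative decoding.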

Similar Work