Binary Embedding: Fundamental Limits And Fast Algorithm

Xinyang Yi, Constantine Caramanis, Eric Price. arXiv 2015 – 17 citations

Hashing Methods

Binary embedding is a nonlinear dimension reduction methodology in which high dimensional data are embedded into the Hamming cube while preserving the structure of the original space. Specifically, for an arbitrary set of \(N\) distinct points in \(\mathbb{S}^{p-1}\), our goal is to encode each point as an \(m\)-dimensional binary string such that we can reconstruct their geodesic distances up to \(\delta\) uniform distortion. Existing binary embedding algorithms either lack theoretical guarantees or suffer from running time \(O(mp)\). We make three contributions: (1) we establish a lower bound showing that any binary embedding oblivious to the set of points requires \(m = \Omega(\frac{1}{\delta^2}\log N)\) bits, and a similar lower bound for non-oblivious embeddings into Hamming distance; (2) [DELETED, see comment]; (3) we also provide an analytic result about embedding a general set of points \(K \subseteq \mathbb{S}^{p-1}\) of possibly infinite size. Our theoretical findings are supported by experiments on both synthetic and real data sets.
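
For reference, the sketch below is not the paper's proposed method but the standard oblivious baseline: sign the coordinates of \(m\) random Gaussian projections, so that the normalized Hamming distance between codes estimates the geodesic (angular) distance on the sphere. Its per-point encoding cost is \(O(mp)\), which is exactly the running time the paper's lower bound and fast algorithm address. The dimensions `p`, `m`, `N` here are illustrative.

```python
import numpy as np

# Minimal sketch of the classical oblivious binary embedding
# (sign of random Gaussian projections), NOT the paper's fast algorithm.
rng = np.random.default_rng(0)

p, m, N = 128, 2048, 4                          # illustrative sizes: ambient dim, bits, points
X = rng.standard_normal((N, p))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # place points on the unit sphere S^{p-1}

A = rng.standard_normal((m, p))                 # data-oblivious projection matrix
codes = (X @ A.T) > 0                           # m-bit code per point; O(mp) per encoding

# Normalized Hamming distance approximates geodesic distance / pi
hamming = np.mean(codes[0] != codes[1])
geodesic = np.arccos(np.clip(X[0] @ X[1], -1.0, 1.0)) / np.pi
print(f"normalized Hamming: {hamming:.3f}, geodesic/pi: {geodesic:.3f}")
```

As \(m\) grows, the gap between the two printed quantities shrinks, matching the \(m = \Theta(\frac{1}{\delta^2}\log N)\) bit-complexity regime discussed in the abstract.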

Similar Work