
Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space

Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, Andrew McCallum. arXiv 2015 – 5 citations

Tags: Efficiency, Scalability

There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all of this work, however, assumes a single vector per word type, ignoring polysemy and thus jeopardizing the embeddings' usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results on the word similarity in context task and demonstrate the method's scalability by training on a corpus of nearly 1 billion tokens on a single machine in less than 6 hours.
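To make the non-parametric step concrete, below is a minimal Python sketch of the sense-assignment rule the abstract alludes to: the current context is compared against each sense's running context-cluster center, and a new sense is created when no existing cluster is similar enough. The function name `assign_sense`, the variable `lam`, and the toy dimensions are illustrative assumptions, not the authors' code.

```python
import numpy as np

def assign_sense(context_vecs, cluster_centers, lam):
    """Non-parametric sense assignment (sketch of the NP-MSSG-style rule).

    context_vecs    : global vectors of the words in the current context
    cluster_centers : one running context-cluster center per known sense
    lam             : similarity threshold; below it, a new sense is created
    Returns the index of the chosen (possibly newly created) sense.
    """
    context = np.mean(context_vecs, axis=0)  # average context representation

    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    sims = [cos(context, c) for c in cluster_centers]
    best = int(np.argmax(sims))
    if sims[best] < lam:
        # No existing sense cluster explains this context well enough:
        # allocate a new sense (the non-parametric step).
        cluster_centers.append(context.copy())
        return len(cluster_centers) - 1
    return best

# Toy usage with random vectors (illustration only):
rng = np.random.default_rng(0)
centers = [rng.standard_normal(50) for _ in range(2)]
context = [rng.standard_normal(50) for _ in range(4)]
sense = assign_sense(context, centers, lam=0.0)
print("assigned sense:", sense, "total senses:", len(centers))
```

In the paper, the winning cluster center is then updated with the new context and only the embedding of the predicted sense receives gradient updates, which is what keeps training cost close to that of the original Skip-gram model.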
