
Going Beyond T-SNE: Exposing \texttt{whatlies} In Text Embeddings

Vincent D. Warmerdam, Thomas Kober, Rachael Tatman. arXiv, 2020


We introduce whatlies, an open source toolkit for visually inspecting word and sentence embeddings. The project offers a unified and extensible API with current support for a range of popular embedding backends, including spaCy, tfhub, huggingface transformers, gensim, fastText and BytePair embeddings. The package combines a domain specific language for vector arithmetic with visualisation tools that make exploring word embeddings more intuitive and concise. It supports many popular dimensionality reduction techniques as well as interactive visualisations that can either be statically exported or shared via Jupyter notebooks. The project documentation is available from https://rasahq.github.io/whatlies/.
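The workflow the abstract describes, arithmetic on embedding vectors followed by a dimensionality-reduction view for plotting, can be sketched without the library itself. The example below is a minimal illustration using toy NumPy vectors and PCA via SVD; none of the names in it come from the whatlies API.

```python
# Hypothetical sketch of the abstract's workflow: vector arithmetic
# on word embeddings, then projection to 2-D for visual inspection.
# Toy random vectors stand in for a real embedding backend.
import numpy as np

rng = np.random.default_rng(0)
words = ["king", "queen", "man", "woman", "apple", "banana"]
emb = {w: rng.normal(size=50) for w in words}

# Vector arithmetic in the spirit of the toolkit's domain specific
# language, e.g. the classic analogy king - man + woman.
analogy = emb["king"] - emb["man"] + emb["woman"]

# Project all vectors (plus the derived one) to 2-D with PCA via SVD;
# the resulting coordinates would feed a scatter plot.
X = np.stack([emb[w] for w in words] + [analogy])
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T

print(coords.shape)  # (7, 2)
```

In the real toolkit this pipeline is a single chained expression over an embedding set; the sketch only shows the underlying linear algebra.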
