LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs

Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, Aran Komatsuzaki. arXiv 2021

[Paper]    
ARXIV Cross Modal

Multi-modal language-vision models trained on hundreds of millions of image-text pairs (e.g. CLIP, DALL-E) have recently surged in popularity, showing remarkable capability for zero- and few-shot learning and transfer even in the absence of per-sample labels on target image data. Despite this trend, to date there has been no publicly available dataset of sufficient scale for training such models from scratch. To address this issue, in a community effort we build and release for public use LAION-400M, a dataset of 400 million CLIP-filtered image-text pairs, together with their CLIP embeddings and kNN indices that allow efficient similarity search.
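The CLIP filtering described above embeds each image and its caption with a pretrained CLIP model and drops pairs whose cosine similarity falls below a threshold (the paper reports 0.3 with CLIP ViT-B/32). Below is a minimal sketch of that filtering step and of a kNN lookup over the resulting embeddings, assuming OpenAI's `clip` package and `faiss` are installed; the flat inner-product index and the random placeholder embeddings are illustrative assumptions, not the project's actual pipeline code.

```python
# Sketch of CLIP-similarity filtering and kNN search, assuming
# `clip` (pip install git+https://github.com/openai/CLIP) and `faiss`.
import clip
import faiss
import numpy as np
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def passes_clip_filter(image_path: str, caption: str, threshold: float = 0.3) -> bool:
    """Keep an image-text pair only if CLIP cosine similarity >= threshold."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    text = clip.tokenize([caption], truncate=True).to(device)
    with torch.no_grad():
        img_emb = model.encode_image(image)
        txt_emb = model.encode_text(text)
    # Cosine similarity = inner product of L2-normalized embeddings.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item() >= threshold

# kNN similarity search over CLIP image embeddings; random vectors stand in
# for real embeddings here, and IndexFlatIP is an illustrative index choice.
embeddings = np.random.randn(10_000, 512).astype("float32")  # ViT-B/32 dim = 512
faiss.normalize_L2(embeddings)  # cosine similarity via inner product
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)
scores, neighbor_ids = index.search(embeddings[:1], k=5)  # 5 nearest neighbors
```

At the dataset's scale an exact flat index would be impractical; the released indices use approximate nearest-neighbor structures, for which faiss offers drop-in alternatives such as quantized or graph-based indices.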

Similar Work