
Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images

Javier Marin, Aritro Biswas, Ferda Ofli, Nicholas Hynes, Amaia Salvador, Yusuf Aytar, Ingmar Weber, Antonio Torralba. arXiv 2018 – 14 citations

[Paper]
Tags: Datasets, Evaluation, Scalability

In this paper, we introduce Recipe1M+, a new large-scale, structured corpus of over one million cooking recipes and 13 million food images. As the largest publicly available collection of recipe data, Recipe1M+ affords the ability to train high-capacity models on aligned, multimodal data. Using these data, we train a neural network to learn a joint embedding of recipes and images that yields impressive results on an image-recipe retrieval task. Moreover, we demonstrate that regularization via the addition of a high-level classification objective both improves retrieval performance to rival that of humans and enables semantic vector arithmetic. We postulate that these embeddings will provide a basis for further exploration of the Recipe1M+ dataset and food and cooking in general. Code, data and models are publicly available.
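To make the abstract's training setup concrete, below is a minimal PyTorch sketch of a two-tower joint embedding trained with an in-batch retrieval loss plus a high-level classification regularizer. Everything here is an illustrative assumption rather than the paper's exact method: the encoder dimensions, the hinge-style loss, the class count, and all names (`JointEmbedding`, `loss_fn`, `lam`) are hypothetical, and the paper's own encoders (e.g., its recipe and image networks) are replaced by precomputed feature vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    """Two-tower embedding: image and recipe features are projected into a
    shared, L2-normalized space; a shared linear classifier provides the
    high-level semantic regularizer mentioned in the abstract."""
    def __init__(self, img_dim=2048, rec_dim=1024, emb_dim=512, num_classes=1048):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb_dim)        # image features -> shared space
        self.rec_proj = nn.Linear(rec_dim, emb_dim)        # recipe features -> shared space
        self.classifier = nn.Linear(emb_dim, num_classes)  # semantic regularizer head

    def forward(self, img_feats, rec_feats):
        v = F.normalize(self.img_proj(img_feats), dim=-1)
        r = F.normalize(self.rec_proj(rec_feats), dim=-1)
        return v, r

def loss_fn(v, r, labels, model, margin=0.3, lam=0.02):
    # Retrieval term: matched image-recipe pairs (the diagonal of the in-batch
    # cosine-similarity matrix) should beat mismatched pairs by `margin`.
    sim = v @ r.t()                                        # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)                          # matched pairs
    neg = sim - torch.eye(len(v), device=v.device) * 1e9   # mask out positives
    retrieval = F.relu(margin - pos + neg).mean()
    # Classification term: both modalities must predict the same semantic class.
    cls = F.cross_entropy(model.classifier(v), labels) + \
          F.cross_entropy(model.classifier(r), labels)
    return retrieval + lam * cls

# Usage with dummy features (shapes are assumptions, not the paper's).
model = JointEmbedding()
img = torch.randn(8, 2048)                 # e.g., pooled CNN image features
rec = torch.randn(8, 1024)                 # e.g., recipe-encoder output
labels = torch.randint(0, 1048, (8,))      # shared semantic class per pair
v, r = model(img, rec)
loss = loss_fn(v, r, labels, model)
loss.backward()
```

Because both modalities land in one normalized space, the semantic vector arithmetic the abstract mentions reduces to nearest-neighbor lookup over combinations of embeddings, e.g., scoring candidates against `v_image - r_recipe_a + r_recipe_b`.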

Similar Work