Learning Embeddings For Product Visual Search With Triplet Loss And Online Sampling

Eric Dodds, Huy Nguyen, Simao Herdade, Jack Culpepper, Andrew Kae, Pierre Garrigues. arXiv 2018 – 1 citation

[Paper]
Tags: Datasets, Distance Metric Learning, Image Retrieval

In this paper, we propose learning an embedding function for content-based image retrieval in the e-commerce domain using the triplet loss together with an online sampling method that constructs triplets from within a minibatch. We compare our method to several strong baselines as well as recent works on the DeepFashion and Stanford Online Products datasets. Our approach significantly outperforms the state-of-the-art on the DeepFashion dataset. With a modification that favors sampling minibatches from a single product category, the same approach achieves results competitive with the state-of-the-art on the Stanford Online Products dataset.
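
The sketch below illustrates the general idea of online triplet mining within a minibatch, not the authors' exact implementation: all valid (anchor, positive, negative) triplets are formed from the embeddings and labels of a single batch, and the standard triplet loss is averaged over the triplets that violate the margin. The PyTorch framing, the "batch-all" mining strategy, and the margin value are illustrative assumptions.

```python
# Minimal sketch of in-batch ("online") triplet mining with a triplet loss.
# Assumption: embeddings come from any network; labels are product IDs.
import torch


def pairwise_distances(embeddings):
    # Squared Euclidean distances between all pairs of embeddings in the batch.
    dot = embeddings @ embeddings.t()
    sq = dot.diagonal()
    dist = sq.unsqueeze(0) - 2.0 * dot + sq.unsqueeze(1)
    return dist.clamp(min=0.0)


def batch_all_triplet_loss(embeddings, labels, margin=0.2):
    # Form every valid (anchor, positive, negative) triplet in the minibatch
    # and average the loss over the triplets that violate the margin.
    dist = pairwise_distances(embeddings)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)                 # [B, B] label match
    pos_mask = same & ~torch.eye(len(labels), dtype=torch.bool)       # exclude self-pairs
    neg_mask = ~same

    # loss[a, p, n] = d(a, p) - d(a, n) + margin
    loss = dist.unsqueeze(2) - dist.unsqueeze(1) + margin
    valid = pos_mask.unsqueeze(2) & neg_mask.unsqueeze(1)             # [B, B, B] valid triplets
    loss = torch.relu(loss) * valid

    num_active = (loss > 0).sum().clamp(min=1)
    return loss.sum() / num_active


if __name__ == "__main__":
    # Toy usage: 32 L2-normalized embeddings drawn from 8 hypothetical products.
    emb = torch.nn.functional.normalize(torch.randn(32, 128), dim=1)
    labels = torch.randint(0, 8, (32,))
    print(batch_all_triplet_loss(emb, labels).item())
```

The modification mentioned for Stanford Online Products would correspond to biasing how minibatches are assembled (drawing products from a single category) rather than changing the loss itself.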

Similar Work