
Multitask Text-to-visual Embedding With Titles And Clickthrough Data

Pranav Aggarwal, Zhe Lin, Baldo Faieta, Saeid Motiian. arXiv 2019 – 2 citations

[Paper]
Distance Metric Learning, Efficiency, Image Retrieval

Text-visual (also called semantic-visual) embedding is a central problem in vision-language research. It typically involves mapping an image and a text description into a common feature space through a CNN image encoder and an RNN language encoder. In this paper, we propose a new method for learning a text-visual embedding using both image titles and click-through data from an image search engine. We also propose a new triplet loss function that models positive awareness of the embedding, and introduce a novel mini-batch-based hard negative sampling approach for better data efficiency during learning. Experimental results show that our proposed method outperforms existing methods and is also effective for real-world text-to-visual retrieval.
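The triplet loss with in-batch hard negative sampling mentioned in the abstract can be sketched roughly as follows. This is a minimal PyTorch illustration under assumed conventions, not the paper's implementation: the exact form of the positive-aware weighting is not reproduced, and `batch_hard_triplet_loss`, the embedding dimension, and the mining of the hardest non-matching image within each mini-batch are standard choices assumed for illustration.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(text_emb, img_emb, margin=0.2):
    """Triplet loss over a mini-batch of matching (text, image) pairs,
    using the hardest non-matching image in the batch as the negative."""
    # L2-normalise so dot products are cosine similarities
    text_emb = F.normalize(text_emb, dim=1)
    img_emb = F.normalize(img_emb, dim=1)
    sim = text_emb @ img_emb.t()                     # (B, B) similarity matrix

    pos_sim = sim.diag()                             # similarity of each matching pair
    # Mask out the positives, then take the hardest (most similar) negative per text
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg_sim = sim.masked_fill(mask, float('-inf')).max(dim=1).values

    # Hinge loss: push each positive above its hardest in-batch negative by a margin
    return F.relu(margin + neg_sim - pos_sim).mean()


# Example usage with random encoder outputs (hypothetical batch size and dimension)
text_emb = torch.randn(32, 256)   # e.g. output of an RNN text encoder
img_emb = torch.randn(32, 256)    # e.g. output of a CNN image encoder
loss = batch_hard_triplet_loss(text_emb, img_emb)
```

Mining negatives only from the current mini-batch avoids an expensive search over the full dataset, which is the data-efficiency idea the abstract alludes to.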

Similar Work