
Learning To Embed Semantic Similarity For Joint Image-text Retrieval

Noam Malali, Yosi Keller. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022 – 8 citations

Tags: Datasets, Distance Metric Learning, Neural Hashing, Quantization, Text Retrieval

We present a deep learning approach for learning joint semantic embeddings of images and captions in a Euclidean space, such that semantic similarity is approximated by L2 distances in the embedding space. To this end, we introduce a metric learning scheme that uses multitask learning with a center loss to learn embeddings of identical semantic concepts. By introducing a differentiable quantization scheme into the end-to-end trainable network, we derive a semantic embedding of semantically similar concepts in Euclidean space. We also propose a novel metric learning formulation using an adaptive margin hinge loss that is refined during the training phase. The proposed scheme was applied to the MS-COCO, Flickr30K, and Flickr8K datasets, and was shown to compare favorably with contemporary state-of-the-art approaches.
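The loss components described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the exact form of their adaptive margin and center loss is not specified here, so the triplet-style hinge and batch-averaged squared-L2 center loss below are assumptions, with the margin treated as a value the training loop refines over time.

```python
import math

def l2_dist(a, b):
    """Euclidean (L2) distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def adaptive_margin_hinge_loss(anchor, positive, negative, margin):
    """Triplet-style hinge loss on L2 distances. `margin` is assumed to be
    a scalar that the training loop refines across epochs ('adaptive')."""
    return max(0.0, l2_dist(anchor, positive) - l2_dist(anchor, negative) + margin)

def center_loss(embeddings, labels, centers):
    """Pulls each embedding toward the (learned) center of its semantic
    concept: squared L2 distance, averaged over the batch (assumed form)."""
    return 0.5 * sum(l2_dist(e, centers[y]) ** 2
                     for e, y in zip(embeddings, labels)) / len(labels)

# Toy usage: a well-separated triplet incurs no hinge penalty.
anchor, positive, negative = [0.0, 0.0], [0.0, 1.0], [3.0, 0.0]
hinge = adaptive_margin_hinge_loss(anchor, positive, negative, margin=0.5)

# Center loss over a two-item batch sharing one concept center.
cl = center_loss([[0.0, 0.0], [2.0, 0.0]], [0, 0], {0: [1.0, 0.0]})
```

In a multitask setup such as the one described, the total objective would combine the two terms, e.g. `hinge + lam * cl` with a weighting hyperparameter `lam`.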

Similar Work