UniVSE: Robust Visual Semantic Embeddings via Structured Semantic Representations

Hao Wu, Jiayuan Mao, Yufeng Zhang, Yuning Jiang, Lei Li, Weiwei Sun, Wei-Ying Ma. arXiv 2019 – 4 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Multimodal Retrieval Robustness Self-Supervised

We propose Unified Visual-Semantic Embeddings (UniVSE) for learning a joint space of visual and textual concepts. The space unifies concepts at different levels, including objects, attributes, relations, and full scenes. A contrastive learning approach is proposed to learn this fine-grained alignment from only image-caption pairs. Moreover, we present an effective approach for enforcing coverage of the semantic components that appear in a sentence. We demonstrate the robustness of UniVSE in defending against text-domain adversarial attacks on cross-modal retrieval tasks. Such robustness also enables the use of visual cues to resolve word dependencies in novel sentences.
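The listing includes no code, but the contrastive alignment step described above can be sketched generically. Below is a minimal sketch of the kind of bidirectional max-margin contrastive loss commonly used to align image and caption embeddings in a shared space from paired data alone; the function name, margin value, and batch-wise negatives are illustrative assumptions, not the authors' exact objective (UniVSE additionally aligns object-, attribute-, and relation-level components).

```python
import torch

def contrastive_alignment_loss(img_emb, txt_emb, margin=0.2):
    """Bidirectional max-margin contrastive loss over a batch of
    image-caption pairs. Matched pairs lie on the diagonal of the
    similarity matrix; all off-diagonal entries act as negatives.

    img_emb, txt_emb: (batch, dim) tensors, assumed L2-normalized.
    """
    scores = img_emb @ txt_emb.t()              # cosine similarities
    diagonal = scores.diag().view(-1, 1)        # matched-pair scores

    # hinge cost when retrieving captions for each image ...
    cost_cap = (margin + scores - diagonal).clamp(min=0)
    # ... and when retrieving images for each caption
    cost_img = (margin + scores - diagonal.t()).clamp(min=0)

    # zero the diagonal so matched pairs incur no cost
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_cap = cost_cap.masked_fill(mask, 0)
    cost_img = cost_img.masked_fill(mask, 0)

    return cost_cap.sum() + cost_img.sum()
```

In this formulation, every other caption in the batch serves as a negative for a given image and vice versa, which is what allows alignment to be learned from image-caption pairs without fine-grained supervision.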

Similar Work