
Joint Wasserstein Autoencoders For Aligning Multimodal Embeddings

Shweta Mahajan, Teresa Botschen, Iryna Gurevych, Stefan Roth. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) – 0 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Tags: Datasets, ICCV, Multimodal Retrieval, Supervised

One of the key challenges in learning joint embeddings of multiple modalities, e.g., of images and text, is to ensure coherent cross-modal semantics that generalize across datasets. We propose to address this through joint Gaussian regularization of the latent representations. Building on Wasserstein autoencoders (WAEs) to encode the input in each domain, we constrain the latent embeddings to match a Gaussian prior that is shared across the two domains, ensuring compatible continuity of the encoded semantic representations of images and texts. Semantic alignment is achieved through supervision from matching image-text pairs. To show the benefits of our semi-supervised representation, we apply it to cross-modal retrieval and phrase localization. We not only achieve state-of-the-art accuracy, but also significantly better generalization across datasets, owing to the semantic continuity of the latent space.
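The objective described above combines two terms: a WAE-style divergence pushing each modality's latents toward a shared Gaussian prior, and a supervised alignment term on matching image-text pairs. The sketch below is a minimal illustration of that loss structure, not the authors' implementation: it uses a kernel MMD estimate (a common WAE regularizer) against samples from the shared prior, and a squared-error alignment term; the weights `lam` and `mu` and the sample shapes are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel matrix between rows of a (n, d) and b (m, d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd(x, y, sigma=1.0):
    # Biased squared-MMD estimate between sample sets x and y;
    # zero when the two empirical distributions coincide.
    return (rbf_kernel(x, x, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean()
            + rbf_kernel(y, y, sigma).mean())

def joint_wae_loss(z_img, z_txt, prior_samples, lam=1.0, mu=1.0):
    # Regularize both modality latents toward the *shared* Gaussian prior
    # (one prior for both domains), and align matched image-text pairs
    # row-by-row with a squared-error supervision term.
    reg = mmd(z_img, prior_samples) + mmd(z_txt, prior_samples)
    align = ((z_img - z_txt) ** 2).sum(-1).mean()
    return lam * reg + mu * align

# Illustrative usage with random stand-ins for encoder outputs.
rng = np.random.default_rng(0)
z_img = rng.normal(size=(32, 8))            # image-encoder latents
z_txt = rng.normal(size=(32, 8))            # paired text-encoder latents
prior = rng.normal(size=(32, 8))            # samples from the shared N(0, I)
loss = joint_wae_loss(z_img, z_txt, prior)
```

In a real setup the reconstruction losses of the two autoencoders would be added as well; the point here is only that a single prior sample set regularizes both encoders, which is what makes the two latent spaces compatible.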

Similar Work