
MCSE: Multimodal Contrastive Learning Of Sentence Embeddings

Miaoran Zhang, Marius Mosbach, David Ifeoluwa Adelani, Michael A. Hedderich, Dietrich Klakow. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2022). 19 citations.

Tags: Datasets, Evaluation, NAACL, Self-Supervised

Learning semantically meaningful sentence embeddings is an open problem in natural language processing. In this work, we propose a sentence embedding learning approach that exploits both visual and textual information via a multimodal contrastive objective. Through experiments on a variety of semantic textual similarity tasks, we demonstrate that our approach consistently improves the performance across various datasets and pre-trained encoders. In particular, combining a small amount of multimodal data with a large text-only corpus, we improve the state-of-the-art average Spearman’s correlation by 1.7%. By analyzing the properties of the textual embedding space, we show that our model excels in aligning semantically similar sentences, providing an explanation for its improved performance.
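
The abstract describes a multimodal contrastive objective that couples a text-only contrastive term with a sentence-image term, trained on a small captioned corpus alongside a large text-only corpus. As an illustration only, here is a minimal PyTorch sketch of what such a combined in-batch InfoNCE objective could look like; the function names (`info_nce`, `mcse_loss`), the weighting parameter `lam`, and the temperature value are assumptions made for the sketch, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def info_nce(anchors, positives, temperature=0.05):
    """In-batch InfoNCE loss: each anchor's positive is the matching row
    in `positives`; all other rows in the batch serve as negatives."""
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    logits = anchors @ positives.t() / temperature   # (B, B) cosine similarities
    labels = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits, labels)


def mcse_loss(text_emb_1, text_emb_2, image_emb, lam=1.0):
    """Hypothetical combined objective: a text-text term (two augmented
    views of the same sentences, as in SimCSE-style approaches) plus a
    text-image term for sentences that have paired images. `lam` weights
    the multimodal term; its value here is an assumption."""
    text_loss = info_nce(text_emb_1, text_emb_2)
    multimodal_loss = info_nce(text_emb_1, image_emb)
    return text_loss + lam * multimodal_loss


if __name__ == "__main__":
    B, D = 8, 256                                    # toy batch of 8 sentence/image pairs
    t1, t2 = torch.randn(B, D), torch.randn(B, D)    # two views of the same sentences
    img = torch.randn(B, D)                          # projected image features
    print(mcse_loss(t1, t2, img))
```

In the setting the abstract describes, the text-only term would apply to the full text corpus while the multimodal term applies only to the small subset of sentences with paired images; the sketch above mirrors that structure at a high level and omits the encoders and projection heads that map both modalities into a shared space.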

Similar Work