
Embedding Geometries of Contrastive Language-Image Pre-training

Jason Chuan-Chih Chou, Nahid Alam. arXiv 2024

Tags: Distance Metric Learning, Evaluation

Since the publication of CLIP, contrastive pre-training with the InfoNCE loss has become a widely popular approach for bridging two or more modalities. Despite this wide adoption, CLIP's original design choices of L2 normalization and cosine-similarity logits have rarely been revisited. We systematically experimented with alternative geometries and softmax logits for language-image pre-training and identified that the variant with intuitive Euclidean geometry, Euclidean CLIP (EuCLIP), matches or exceeds the performance of CLIP and supports hierarchical relationships at least as well as the more complicated hyperbolic alternative.
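
To make the geometric distinction concrete, here is a minimal PyTorch sketch of a symmetric InfoNCE loss that swaps CLIP's cosine-similarity logits for Euclidean-distance logits. This is an illustration under assumptions, not the paper's exact formulation: the function name `clip_infonce`, the use of squared distance, and the scalar `logit_scale` are all hypothetical choices for exposition.

```python
import torch
import torch.nn.functional as F

def clip_infonce(img_emb, txt_emb, logit_scale, geometry="cosine"):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    geometry="cosine":    CLIP's original choice: L2-normalize both
                          embeddings, logits are scaled cosine similarities.
    geometry="euclidean": a EuCLIP-style variant (illustrative assumption):
                          logits are scaled negative squared Euclidean
                          distances between the unnormalized embeddings.
    """
    if geometry == "cosine":
        img = F.normalize(img_emb, dim=-1)
        txt = F.normalize(txt_emb, dim=-1)
        logits = logit_scale * img @ txt.t()
    elif geometry == "euclidean":
        # torch.cdist gives pairwise L2 distances; negate so that
        # closer pairs receive larger logits under the softmax.
        logits = -logit_scale * torch.cdist(img_emb, txt_emb) ** 2
    else:
        raise ValueError(f"unknown geometry: {geometry}")

    # Matched image/text pairs sit on the diagonal of the logit matrix.
    targets = torch.arange(len(img_emb), device=logits.device)
    # Contrast images against texts and texts against images.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

Note the key design difference this sketch highlights: with cosine logits, embedding norms are discarded by the normalization, whereas with Euclidean logits the norms carry information, which is what allows a Euclidean embedding space to encode hierarchy (e.g., generic concepts nearer the origin) without resorting to hyperbolic geometry.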
