
The NT-Xent Loss Upper Bound

Wilhelm Ågren. ArXiv 2022 – 2 citations

[Paper]
Tags: Evaluation, Self-Supervised, Supervised, Tools & Libraries

Self-supervised learning is a growing paradigm in deep representation learning, showing strong generalization and competitive performance in low-label data regimes. The SimCLR framework proposes the NT-Xent loss for contrastive representation learning; the objective of the loss is to maximize agreement (similarity) between sampled positive pairs. This short paper derives an upper bound on the loss and on the average similarity. An analysis of the implications is not provided, however, and we strongly encourage anyone in the field to conduct one.
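For reference, the NT-Xent loss that the paper bounds can be sketched as below. This is a minimal PyTorch rendering of the standard SimCLR objective, not the paper's own code; the function name `nt_xent` and the default temperature of 0.5 are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss for a batch of N positive pairs (z1[i], z2[i])."""
    n = z1.size(0)
    # Stack both views and project onto the unit sphere: (2N, d).
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    # Pairwise cosine similarities, scaled by the temperature.
    sim = (z @ z.t()) / temperature
    # Exclude self-similarity so an anchor never counts itself as a negative.
    sim.fill_diagonal_(float("-inf"))
    # The positive for anchor i is i+n, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    # Cross-entropy over the similarity rows is exactly the NT-Xent objective.
    return F.cross_entropy(sim, targets)

# Usage: random embeddings standing in for two augmented views of a batch.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2).item())
```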

Similar Work