
Conditional Negative Sampling For Contrastive Learning Of Visual Representations

Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman. arXiv 2020 – 26 citations

Tags: Datasets, Evaluation, Self-Supervised, Supervised, Unsupervised

Recent methods for learning unsupervised visual representations, dubbed contrastive learning, optimize the noise-contrastive estimation (NCE) bound on mutual information between two views of an image. NCE uses randomly sampled negative examples to normalize the objective. In this paper, we show that choosing difficult negatives, i.e. those more similar to the current instance, can yield stronger representations. To do this, we introduce a family of mutual information estimators that sample negatives conditionally – in a “ring” around each positive. We prove that these estimators lower-bound mutual information, with higher bias but lower variance than NCE. Experimentally, we find that our approach, applied on top of existing models (IR, CMC, and MoCo), improves accuracy by 2–5 percentage points in each case, measured by linear evaluation on four standard image datasets. Moreover, we find continued benefits when transferring features to a variety of new image distributions from the Meta-Dataset collection and to a variety of downstream tasks such as object detection, instance segmentation, and keypoint detection.
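The central idea in the abstract, restricting NCE negatives to a similarity "ring" around each anchor, can be illustrated with a short sketch. The function below is a minimal, hypothetical PyTorch implementation, not the authors' code: it assumes cosine similarity and illustrative percentile thresholds (`lower_pct`, `upper_pct`, `tau` are made-up parameter names), keeping only negatives that are hard (similar to the anchor) but not so similar that they are likely false negatives.

```python
import torch
import torch.nn.functional as F

def ring_nce_loss(anchor, positive, negatives,
                  lower_pct=0.5, upper_pct=0.9, tau=0.07):
    """Sketch of an InfoNCE-style loss whose negatives are restricted to a
    similarity 'ring' around the anchor. All names, defaults, and the use of
    cosine similarity are assumptions for illustration, not the paper's code.

    anchor:    (D,)   embedding of the current instance
    positive:  (D,)   embedding of the other view of the same image
    negatives: (N, D) embeddings of candidate negative examples
    """
    # Work with unit-norm embeddings so dot products are cosine similarities.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Similarity of each candidate negative to the anchor.
    neg_sims = negatives @ anchor  # shape (N,)

    # The 'ring': negatives whose similarity falls between two percentiles.
    lo = torch.quantile(neg_sims, lower_pct)
    hi = torch.quantile(neg_sims, upper_pct)
    ring = negatives[(neg_sims >= lo) & (neg_sims <= hi)]
    if ring.numel() == 0:
        ring = negatives  # guard: fall back to all negatives if the ring is empty

    # Standard NCE normalization, but only over the ring negatives.
    pos_logit = (anchor @ positive) / tau          # scalar
    neg_logits = (ring @ anchor) / tau             # (K,)
    logits = torch.cat([pos_logit.view(1), neg_logits]).view(1, -1)

    # The positive (index 0) must win against the ring negatives.
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))

# Toy usage with random 128-d features and 256 candidate negatives.
anchor = torch.randn(128)
positive = anchor + 0.1 * torch.randn(128)
negatives = torch.randn(256, 128)
loss = ring_nce_loss(anchor, positive, negatives)
```

Tightening the ring (raising `lower_pct`) makes the estimator more biased but lower variance, matching the bias/variance trade-off relative to NCE stated in the abstract.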

Similar Work