ContextCLIP: Contextual Alignment of Image-Text Pairs on CLIP Visual Representations

Chanda Grover, Indra Deep Mastan, Debayan Gupta. Proceedings of the Thirteenth Indian Conference on Computer Vision, Graphics and Image Processing 2022 – 4 citations

[Paper]
Tags: Datasets, Evaluation, Few Shot & Zero Shot, Image Retrieval, Multimodal Retrieval

State-of-the-art empirical work has shown that visual representations learned by deep neural networks are robust and capable of performing classification tasks on diverse datasets. For example, CLIP demonstrated zero-shot transfer performance on multiple datasets for classification tasks in a joint embedding space of image and text pairs. However, it showed negative transfer performance on standard datasets such as Birdsnap, RESISC45, and MNIST. In this paper, we propose ContextCLIP, a contextual and contrastive learning framework for the contextual alignment of image-text pairs that learns robust visual representations on the Conceptual Captions dataset. Our framework improves image-text alignment by aligning text and image representations contextually in the joint embedding space. ContextCLIP showed good qualitative performance for text-to-image retrieval tasks and enhanced classification accuracy. We evaluated our model quantitatively with zero-shot transfer and fine-tuning experiments on the CIFAR-10, CIFAR-100, Birdsnap, RESISC45, and MNIST datasets for the classification task.
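For context, the joint image-text embedding that the abstract builds on is typically trained with a symmetric contrastive (CLIP-style InfoNCE) objective. The sketch below illustrates that generic setup only; the encoders are placeholder linear layers and the specific contextual alignment loss of ContextCLIP is not reproduced from the paper.

```python
# Minimal sketch of a CLIP-style contrastive objective over a joint embedding space.
# The image/text encoders are placeholder linear layers for illustration; ContextCLIP's
# actual contextual alignment objective is described in the paper linked above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    def __init__(self, img_dim=512, txt_dim=512, embed_dim=256):
        super().__init__()
        self.image_proj = nn.Linear(img_dim, embed_dim)   # stand-in for a CLIP image encoder
        self.text_proj = nn.Linear(txt_dim, embed_dim)    # stand-in for a CLIP text encoder
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # learnable log temperature

    def forward(self, image_feats, text_feats):
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        # Pairwise cosine similarities, scaled by the temperature.
        return self.logit_scale.exp() * img @ txt.t()

def contrastive_loss(logits):
    # Symmetric cross-entropy: matched image-text pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy batch of 8 image-text pairs with random placeholder features.
model = JointEmbedding()
images = torch.randn(8, 512)
texts = torch.randn(8, 512)
loss = contrastive_loss(model(images, texts))
loss.backward()
print(f"contrastive loss: {loss.item():.4f}")
```

Zero-shot classification (as in the CIFAR-10/100, Birdsnap, RESISC45, and MNIST experiments) then amounts to embedding class-name prompts with the text encoder and assigning each image to the class whose text embedding has the highest cosine similarity.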

Similar Work