
Clip-art: Contrastive Pre-training For Fine-grained Art Classification

Marcos V. Conde, Kerem Turgutlu. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) – 88 citations


Existing computer vision research on artwork struggles with recognizing fine-grained attributes and with the scarcity of curated annotated datasets, which are costly to create. To the best of our knowledge, this is one of the first methods to use CLIP (Contrastive Language-Image Pre-Training) to train a neural network on pairs of artwork images and text descriptions. CLIP learns directly from free-form art descriptions or, where available, curated fine-grained labels. The model's zero-shot capability allows it to predict an accurate natural-language description for a given image without being directly optimized for that task. Our approach addresses two challenges: instance retrieval and fine-grained artwork attribute recognition. We use the iMet Dataset, which we consider the largest annotated artwork dataset, and achieve competitive results on this benchmark using only self-supervision.
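The zero-shot prediction described above works by embedding the image and each candidate text description into a shared space, then ranking descriptions by cosine similarity. A minimal sketch of that scoring step, using synthetic embeddings in place of CLIP's actual image and text encoders (the function name and temperature value are illustrative, not from the paper):

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    """CLIP-style zero-shot scoring: L2-normalize the embeddings,
    take cosine similarities, and apply a temperature-scaled softmax."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)          # cosine similarities, scaled
    exp = np.exp(logits - logits.max())         # stable softmax
    return exp / exp.sum()

# Toy example: 512-d synthetic embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))
# Make candidate description 1 nearly aligned with the image embedding.
text_embs[1] = image_emb + 0.1 * rng.normal(size=512)

probs = zero_shot_scores(image_emb, text_embs)
print(probs.argmax())  # → 1, the best-matching description
```

In the real pipeline the embeddings would come from CLIP's image and text towers; the ranking logic for both zero-shot attribute recognition and instance retrieval reduces to this same normalized dot product.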
