
Exploiting Clip-based Multi-modal Approach For Artwork Classification And Retrieval

Alberto Baldrati, Marco Bertini, Tiberio Uricchio, Alberto Del Bimbo. Communications in Computer and Information Science 2023 – 5 citations

[Paper]
Tags: Datasets, Few Shot & Zero Shot, Supervised, Unsupervised

Recent advances in multimodal image pretraining have shown that visual models trained with semantically dense textual supervision tend to generalize better than those trained on categorical attributes or with unsupervised techniques. Building on this, in this work we investigate how the recent CLIP model can be applied to several tasks in the artwork domain. We perform exhaustive experiments on the NoisyArt dataset, a collection of artwork images crawled from public resources on the web. On this dataset, CLIP achieves impressive results on (zero-shot) classification and promising results on both artwork-to-artwork and description-to-artwork retrieval.
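The zero-shot classification mechanism the abstract refers to can be sketched as follows: embed the image and a text prompt per class into a shared space, then pick the class whose prompt embedding is most cosine-similar to the image embedding. This is a minimal illustration using synthetic embeddings in place of real CLIP features; the prompt texts and dimensions are purely illustrative.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """Return the index of the class prompt whose embedding is most
    cosine-similar to the image embedding (the core of CLIP-style
    zero-shot classification)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarities, one per class prompt
    return int(np.argmax(sims))

# Toy demo: 4-d synthetic embeddings standing in for CLIP features.
prompts = ["a painting by Monet", "a painting by Van Gogh"]
text_embs = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0]])
image_emb = np.array([0.9, 0.1, 0.0, 0.0])  # closest to the first prompt
print(prompts[zero_shot_classify(image_emb, text_embs)])
```

The same similarity ranking, applied over a gallery of image embeddings rather than class prompts, yields the artwork-to-artwork and description-to-artwork retrieval settings mentioned above.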

Similar Work