
VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding

Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) – 341 citations

Tags: EMNLP, Evaluation, Few Shot & Zero Shot, Supervised, Video Retrieval

We present VideoCLIP, a contrastive approach to pre-train a unified model for zero-shot video and text understanding, without using any labels on downstream tasks. VideoCLIP trains a transformer for video and text by contrasting temporally overlapping positive video-text pairs with hard negatives from nearest neighbor retrieval. Our experiments on a diverse series of downstream tasks, including sequence-level text-video retrieval, VideoQA, token-level action localization, and action segmentation, reveal state-of-the-art performance, surpassing prior work and in some cases even outperforming supervised approaches. Code is made available at https://github.com/pytorch/fairseq/tree/main/examples/MMPT.
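To make the contrastive objective concrete, below is a minimal sketch of a VideoCLIP-style symmetric InfoNCE loss over a batch of video and text embeddings. This is an illustration rather than the authors' fairseq/MMPT implementation: the function name, tensor shapes, and temperature value are assumptions, and it presumes the transformer has already produced pooled clip and text embeddings for temporally overlapping pairs (with hard negatives supplied by how the batch was retrieved).

```python
# Minimal sketch of a VideoCLIP-style symmetric contrastive (InfoNCE) loss.
# Not the authors' implementation; names and defaults here are hypothetical.
import torch
import torch.nn.functional as F


def video_text_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """video_emb, text_emb: (batch, dim) pooled embeddings of temporally
    overlapping video clips and text. Row i of each tensor forms a positive
    pair; all other rows in the batch serve as negatives (hard negatives if
    the batch was assembled via nearest-neighbor retrieval, as in the paper).
    """
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = video_emb @ text_emb.t() / temperature       # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_v2t = F.cross_entropy(logits, targets)           # video -> text
    loss_t2v = F.cross_entropy(logits.t(), targets)       # text -> video
    return 0.5 * (loss_v2t + loss_t2v)


if __name__ == "__main__":
    # Random embeddings standing in for transformer outputs.
    v = torch.randn(8, 512)
    t = torch.randn(8, 512)
    print(video_text_contrastive_loss(v, t).item())
```

The symmetric form (averaging the video-to-text and text-to-video cross-entropy terms) is the standard choice for dual-encoder contrastive pre-training; the hard negatives the paper emphasizes come from batch construction, not from the loss itself.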
