
Self-Supervised Video Hashing via Bidirectional Transformers

Shuyan Li, Xiu Li, Jiwen Lu, Jie Zhou. ArXiv 2024

[Paper]    
ARXIV Self-Supervised Video Retrieval

Most existing unsupervised video hashing methods are built on unidirectional models with less reliable training objectives, which underexploit the correlations among frames and the similarity structure between videos. To enable efficient, scalable video retrieval, we propose a self-supervised video hashing method based on Bidirectional Transformers (BTH). Based on the encoder-decoder structure of transformers, we design a visual cloze task to fully exploit the bidirectional correlations between frames. To unveil the similarity structure between unlabeled videos, we further develop a similarity reconstruction task by establishing reliable and effective similarity connections in the video space. Furthermore, we develop a cluster assignment task to exploit the structural statistics of the whole dataset so that more discriminative binary codes can be learned. Extensive experiments conducted on three public benchmark datasets, FCVID, ActivityNet, and YFCC, demonstrate the superiority of our proposed approach.
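The abstract's visual cloze task amounts to masked-frame reconstruction with a bidirectional transformer encoder, with binary codes read off the encoder states. Below is a minimal PyTorch sketch of that core idea; the class name, dimensions, pooling, and the tanh relaxation for binarization are illustrative assumptions, not BTH's exact design, and the similarity reconstruction and cluster assignment tasks are omitted.

```python
import torch
import torch.nn as nn

class BidirectionalVideoHasher(nn.Module):
    """Sketch of a visual-cloze objective: mask random frame positions and
    reconstruct them from bidirectional context; hash codes come from a
    tanh-relaxed projection of the pooled encoder states."""

    def __init__(self, feat_dim=2048, d_model=256, code_bits=64,
                 nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.recon_head = nn.Linear(d_model, feat_dim)  # visual cloze output
        self.hash_head = nn.Linear(d_model, code_bits)  # binary-code output

    def forward(self, frames, mask_ratio=0.3):
        # frames: (batch, num_frames, feat_dim) pre-extracted frame features
        x = self.embed(frames)
        # Randomly mask frame positions so the encoder must rely on
        # bidirectional (past and future) context to fill them in.
        mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        h = self.encoder(x)
        recon = self.recon_head(h)
        # Mean-pool frame states; tanh is a differentiable stand-in for
        # sign(.) during training (an assumption, one common relaxation).
        codes = torch.tanh(self.hash_head(h.mean(dim=1)))
        return recon, codes, mask

# Usage: cloze loss on masked positions only; sign(codes) at retrieval time.
model = BidirectionalVideoHasher()
frames = torch.randn(8, 25, 2048)
recon, codes, mask = model(frames, mask_ratio=0.3)
cloze_loss = ((recon - frames) ** 2)[mask].mean()
binary = torch.sign(codes)  # binary hash codes for indexing/lookup
```

In a full system in the spirit of the abstract, this cloze loss would be trained jointly with the similarity reconstruction and cluster assignment objectives, and retrieval would compare the sign-quantized codes by Hamming distance.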

Similar Work