
Cross-modal Contrastive Learning With Asymmetric Co-attention Network For Video Moment Retrieval

Love Panta, Prashant Shrestha, Brabeem Sapkota, Amrita Bhattarai, Suresh Manandhar, Anand Kumar Sah. arXiv 2023 – 1 citation

Tags: Datasets, Distance Metric Learning, Evaluation, Self-Supervised

Video moment retrieval is a challenging task requiring fine-grained interactions between video and text modalities. Recent work in image-text pretraining has demonstrated that most existing pretrained models suffer from information asymmetry due to the difference in length between visual and textual sequences. We question whether the same problem also exists in the video-text domain, where there is the additional need to preserve both spatial and temporal information. Thus, we evaluate a recently proposed solution involving the addition of an asymmetric co-attention network for video grounding tasks. Additionally, we incorporate momentum contrastive loss for robust, discriminative representation learning in both modalities. We note that integrating these supplementary modules yields better performance than state-of-the-art models on the TACoS dataset and comparable results on ActivityNet Captions, while using significantly fewer parameters than the baseline.
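
The abstract mentions a momentum contrastive loss for cross-modal representation learning. The sketch below is a generic MoCo-style formulation, not the authors' implementation: the function and parameter names (`momentum_contrastive_loss`, `momentum_update`, the negative `queue`, the temperature of 0.07, the momentum of 0.999) are illustrative assumptions, with video-moment features as queries and text features as keys from a momentum-updated encoder.

```python
import torch
import torch.nn.functional as F

def momentum_contrastive_loss(query, key, queue, temperature=0.07):
    """InfoNCE-style loss: query features (e.g. video moments) are pulled
    toward their paired key features (e.g. text) from a momentum encoder
    and pushed away from a queue of negative keys. Shapes are assumed to be
    query/key: (B, D), queue: (K, D)."""
    query = F.normalize(query, dim=1)
    key = F.normalize(key, dim=1)
    queue = F.normalize(queue, dim=1)

    pos = torch.einsum("bd,bd->b", query, key).unsqueeze(1)  # (B, 1) positive logits
    neg = torch.einsum("bd,kd->bk", query, queue)            # (B, K) negative logits
    logits = torch.cat([pos, neg], dim=1) / temperature
    # The positive key sits at index 0 of every row.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    """Exponential moving-average update of the key (momentum) encoder
    from the query encoder, as in MoCo-style training."""
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
```

In a typical training loop of this kind, the loss would be applied symmetrically (video-to-text and text-to-video) and `momentum_update` called once per step after the optimizer update; the paper's exact loss weighting and encoder architecture are not shown here.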
