BiMa: Towards Biases Mitigation for Text-Video Retrieval via Scene Element Guidance

Huy Le, Nhat Chung, Tung Kieu, Anh Nguyen, Ngan Le. Proceedings of the 33rd ACM International Conference on Multimedia, 2025.


Text-video retrieval (TVR) systems often suffer from visual-linguistic biases present in datasets, which cause pre-trained vision-language models to overlook key details. To address this, we propose BiMa, a novel framework designed to mitigate biases in both visual and textual representations. Our approach begins by generating scene elements that characterize each video by identifying relevant entities/objects and activities. For visual debiasing, we integrate these scene elements into the video embeddings, enhancing them to emphasize fine-grained and salient details. For textual debiasing, we introduce a mechanism to disentangle text features into content and bias components, enabling the model to focus on meaningful content while separately handling biased information. Extensive experiments and ablation studies across five major TVR benchmarks (i.e., MSR-VTT, MSVD, LSMDC, ActivityNet, and DiDeMo) demonstrate the competitive performance of BiMa. Additionally, the model’s bias mitigation capability is consistently validated by its strong results on out-of-distribution retrieval tasks.
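The abstract describes two mechanisms: fusing scene-element embeddings into the video representation (visual debiasing) and splitting text features into content and bias components (textual debiasing). The sketch below illustrates one plausible way these two ideas could be wired together; all module names, dimensions, and the attention-based fusion and linear-head disentanglement choices are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of the two debiasing ideas described in the abstract.
# Module names, dimensions, and design choices here are assumptions, not BiMa's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SceneElementFusion(nn.Module):
    """Visual debiasing (sketch): attend video embeddings over scene-element
    embeddings (entities/objects/activities) and add the attended summary."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video_emb: torch.Tensor, scene_emb: torch.Tensor) -> torch.Tensor:
        # video_emb: (B, T, D) frame/clip features; scene_emb: (B, S, D) scene elements
        attended, _ = self.attn(query=video_emb, key=scene_emb, value=scene_emb)
        return self.norm(video_emb + attended)


class TextDisentangler(nn.Module):
    """Textual debiasing (sketch): split a text feature into a content part and
    a bias part with two linear heads; retrieval would use the content part."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.content_head = nn.Linear(dim, dim)
        self.bias_head = nn.Linear(dim, dim)

    def forward(self, text_emb: torch.Tensor):
        return self.content_head(text_emb), self.bias_head(text_emb)


if __name__ == "__main__":
    B, T, S, D = 2, 12, 6, 512
    video = torch.randn(B, T, D)   # video frame features
    scene = torch.randn(B, S, D)   # scene-element embeddings
    text = torch.randn(B, D)       # caption features

    fused_video = SceneElementFusion(D)(video, scene).mean(dim=1)  # pool to (B, D)
    content, bias = TextDisentangler(D)(text)

    # Retrieval scores from debiased (content) text against scene-enhanced video features.
    scores = F.normalize(content, dim=-1) @ F.normalize(fused_video, dim=-1).T
    print(scores.shape)  # (B, B) text-to-video similarity matrix
```

In this reading, the bias component would be supervised or regularized separately so that retrieval relies only on the content features, which matches the abstract's claim of handling biased information apart from meaningful content.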

Similar Work