
Towards Retrieval Augmented Generation Over Large Video Libraries

Yannis Tevissen, Khalil Guetari, Frédéric Petitpont. 16th International Conference on Human System Interaction (HSI) 2024 – 5 citations


Video content creators need efficient tools to repurpose content, a task that often requires complex manual or automated searches. Crafting a new video from a large video library remains a challenge. In this paper, we introduce the task of Video Library Question Answering (VLQA) through an interoperable architecture that applies Retrieval Augmented Generation (RAG) to video libraries. We propose a system that uses large language models (LLMs) to generate search queries and retrieve relevant video moments indexed by speech and visual metadata. An answer generation module then integrates the user query with this metadata to produce responses anchored to specific video timestamps. This approach shows promise for multimedia content retrieval and AI-assisted video content creation.
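The abstract describes a three-stage pipeline: an LLM rewrites the user question into search queries, those queries retrieve video moments indexed by speech and visual metadata, and an answer generator grounds its response in the retrieved metadata with explicit timestamps. The sketch below is a minimal illustration of that flow under assumptions, not the authors' implementation: the `VideoMoment` schema, the in-memory index, the keyword-overlap retriever, and the `llm()` stub are all hypothetical placeholders.

```python
from dataclasses import dataclass

# Hypothetical index of video moments, each described by speech
# (transcript) and visual metadata, mirroring the paper's indexing step.
@dataclass
class VideoMoment:
    video_id: str
    start: float            # segment start, in seconds
    end: float              # segment end, in seconds
    transcript: str         # speech metadata (e.g. from ASR)
    visual_tags: list[str]  # visual metadata (e.g. from an image tagger)

INDEX = [
    VideoMoment("vid_001", 12.0, 34.5, "we unveil the new electric car", ["car", "stage"]),
    VideoMoment("vid_002", 80.0, 95.0, "battery range tests on the highway", ["car", "road"]),
]

def llm(prompt: str) -> str:
    """Placeholder for a large language model call; swap in a real
    client here. Returns a canned string so the sketch runs offline."""
    return "electric car unveiling"

def generate_search_queries(question: str) -> list[str]:
    # Step 1: the LLM rewrites the user question into search queries.
    return [llm(f"Rewrite as a video search query: {question}")]

def retrieve(queries: list[str], k: int = 3) -> list[VideoMoment]:
    # Step 2: naive keyword-overlap retrieval over the metadata index;
    # a real system would likely use BM25 or dense embeddings instead.
    words = {w for q in queries for w in q.lower().split()}
    def score(m: VideoMoment) -> int:
        text = (m.transcript + " " + " ".join(m.visual_tags)).lower()
        return sum(w in text for w in words)
    return sorted(INDEX, key=score, reverse=True)[:k]

def answer(question: str) -> str:
    # Step 3: the answer generator combines the user query with the
    # retrieved metadata and cites explicit video timestamps.
    moments = retrieve(generate_search_queries(question))
    context = "\n".join(
        f"[{m.video_id} {m.start:.0f}s-{m.end:.0f}s] {m.transcript}" for m in moments
    )
    return llm(f"Answer '{question}' with timestamps, using:\n{context}")

print(answer("When do we show the new electric car?"))
```

The key design point the abstract emphasizes is interoperability: each stage exchanges plain text (queries, metadata, timestamps), so the retriever or the LLM can be replaced independently of the rest of the pipeline.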
