QMAVIS: Long Video-Audio Understanding using Fusion of Large Multimodal Models

📅 2026-01-10
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video-audio multimodal models struggle to capture temporal semantics and fine-grained audio details in long-form videos spanning minutes to hours. This work proposes QMAVIS, a long-form video-audio understanding framework that introduces late-fusion strategies with large models to this task for the first time. By synergistically integrating large multimodal models (LMMs), large language models (LLMs), and automatic speech recognition (ASR) systems, the framework jointly parses narrative structure and scene-level detail, overcoming conventional limits on video duration and semantic coherence. It achieves a 38.75% performance gain over VideoLLaMA2 and InternVL2 on VideoMME (with subtitles) and up to a 2% improvement on challenging benchmarks such as PerceptionTest and EgoSchema. These advances significantly enhance long-video perception and broaden its applicability to domains such as embodied intelligence.
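The paper does not ship reference code, but the late-fusion idea described above can be sketched in a few lines. In the Python sketch below, `caption_scene`, `transcribe`, and `ask_llm` are hypothetical stand-ins for whatever LMM, ASR, and LLM backends are actually used; the fixed scene length and the prompt format are likewise assumptions, not the authors' design.

```python
# Illustrative late-fusion sketch; every backend is injected as a callable,
# since the paper publishes no code. The steps only mirror the high-level
# description above: per-scene LMM captions + ASR transcript -> LLM answer.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scene:
    start_s: float   # scene start time (seconds)
    end_s: float     # scene end time (seconds)
    caption: str     # LMM description of the visual content

def late_fusion_answer(
    video_path: str,
    question: str,
    duration_s: float,
    caption_scene: Callable[[str, float, float], str],  # LMM backend (assumed interface)
    transcribe: Callable[[str], str],                   # ASR backend (assumed interface)
    ask_llm: Callable[[str], str],                      # LLM backend (assumed interface)
    scene_len_s: float = 60.0,
) -> str:
    """Caption scene chunks with an LMM, transcribe speech with ASR,
    then let a text-only LLM fuse both streams to answer the question."""
    # 1. Visual stream: describe fixed-length scene chunks independently.
    scenes: List[Scene] = []
    start = 0.0
    while start < duration_s:
        end = min(start + scene_len_s, duration_s)
        scenes.append(Scene(start, end, caption_scene(video_path, start, end)))
        start = end

    # 2. Audio stream: one transcript for the whole video.
    transcript = transcribe(video_path)

    # 3. Late fusion: the streams are merged as text, not as features.
    scene_block = "\n".join(
        f"[{s.start_s:.0f}s-{s.end_s:.0f}s] {s.caption}" for s in scenes
    )
    prompt = (
        f"Scene-by-scene visual descriptions:\n{scene_block}\n\n"
        f"Speech transcript:\n{transcript}\n\n"
        f"Question: {question}"
    )
    return ask_llm(prompt)
```

A real pipeline would likely segment scenes at shot boundaries rather than fixed one-minute windows; the fixed window here only keeps the sketch short.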

📝 Abstract
Large Multimodal Models (LMMs) for video-audio understanding have traditionally been evaluated only on shorter videos a few minutes long. In this paper, we introduce QMAVIS (Q Team-Multimodal Audio Video Intelligent Sensemaking), a novel long video-audio understanding pipeline built through a late fusion of LMMs, Large Language Models, and speech recognition models. QMAVIS addresses the gap in long-form video analytics, particularly for videos ranging from a few minutes to beyond an hour long, opening up new potential applications in sensemaking, video content analysis, embodied AI, etc. Quantitative experiments using QMAVIS demonstrated a 38.75% improvement over state-of-the-art video-audio LMMs like VideoLLaMA2 and InternVL2 on the VideoMME (with subtitles) dataset, which comprises long videos with audio information. Evaluations on other challenging video understanding datasets like PerceptionTest and EgoSchema saw up to 2% improvement, indicating competitive performance. Qualitative experiments also showed that QMAVIS is able to extract the nuances of different scenes in long video-audio content while understanding the overarching narrative. Ablation studies were also conducted to ascertain the impact of each component in the fusion pipeline.
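As a rough illustration of how accuracy on multiple-choice benchmarks like VideoMME, PerceptionTest, and EgoSchema is typically scored, the sketch below assumes items carrying a question, lettered options, and a gold answer letter; the field names and the `answer` callable are hypothetical, not the paper's actual evaluation harness.

```python
# Hypothetical scoring loop for a VideoMME-style multiple-choice benchmark.
# The item fields ("video", "question", "options", "answer") and the
# answer() callable are assumptions, not the paper's evaluation code.
from typing import Callable, Iterable

def mcq_accuracy(examples: Iterable[dict],
                 answer: Callable[[str, str], str]) -> float:
    """Fraction of items where the pipeline's option letter matches gold."""
    correct, total = 0, 0
    for ex in examples:
        # Render the question plus lettered options as a single prompt.
        option_lines = "\n".join(
            f"{letter}. {text}" for letter, text in sorted(ex["options"].items())
        )
        prompt = (f"{ex['question']}\n{option_lines}\n"
                  "Reply with the option letter only.")
        pred = answer(ex["video"], prompt).strip().upper()[:1]
        correct += int(pred == ex["answer"].strip().upper())
        total += 1
    return correct / total if total else 0.0
```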
Problem

Research questions and friction points this paper is trying to address.

long video-audio understanding
large multimodal models
video content analysis
sensemaking
embodied AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

long video-audio understanding
late fusion
large multimodal models
video content analysis
speech recognition integration
👥 Authors
Zixing Lin
Q Team, Home Team Science and Technology Agency (HTX), Singapore
Jiale Wang
HKUST, BUPT
Gee Wah Ng
Q Team, Home Team Science and Technology Agency (HTX), Singapore
L. Mak
Q Team, Home Team Science and Technology Agency (HTX), Singapore
Chan Zhi Yang Jeriel
National University of Singapore
Jun Yang Lee
Nanyang Technological University, Singapore
Yaohao Li
Nanyang Technological University, Singapore