OmniVideo-R1: Reinforcing Audio-visual Reasoning with Query Intention and Modality Attention

📅 2026-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited co-perception and holistic reasoning capabilities of existing full-video models on audio-visual understanding tasks. To overcome this, the authors propose a query-intent-guided multimodal reasoning framework that combines query-intensive grounding, based on self-supervised learning, with a modality-attentive fusion mechanism built on contrastive learning. Together, these components enable tight alignment and collaborative reasoning between audio and visual cues, strengthening cross-modal semantic understanding. Extensive experiments show that the method consistently outperforms strong baselines across multiple standard benchmarks, achieving state-of-the-art performance with robust generalization.

📝 Abstract
While humans perceive the world through diverse modalities that operate synergistically to support a holistic understanding of their surroundings, existing omnivideo models still face substantial challenges on audio-visual understanding tasks. In this paper, we propose OmniVideo-R1, a novel reinforced framework that improves mixed-modality reasoning. OmniVideo-R1 empowers models to "think with omnimodal cues" by two key strategies: (1) query-intensive grounding based on self-supervised learning paradigms; and (2) modality-attentive fusion built upon contrastive learning paradigms. Extensive experiments on multiple benchmarks demonstrate that OmniVideo-R1 consistently outperforms strong baselines, highlighting its effectiveness and robust generalization capabilities.
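The abstract mentions contrastive learning for modality-attentive fusion but gives no implementation detail. As an illustration only, the standard contrastive-alignment objective such methods typically build on can be sketched as a symmetric InfoNCE loss over paired audio and visual clip embeddings; every name, shape, and hyperparameter below is an assumption, not the paper's actual method:

```python
import numpy as np

def infonce_loss(audio_emb, visual_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired audio/visual embeddings.

    audio_emb, visual_emb: (B, D) arrays; row i of each is assumed to come
    from the same video clip (the positive pair).
    """
    # L2-normalize so dot products are cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = visual_emb / np.linalg.norm(visual_emb, axis=1, keepdims=True)
    logits = a @ v.T / temperature          # (B, B); matching pairs on the diagonal
    idx = np.arange(len(a))

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()    # pick out the diagonal (positive) pairs

    # average audio->visual and visual->audio directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Aligned pairs drive the diagonal similarities up and off-diagonal similarities down, which is the basic mechanism that lets audio and visual cues share one embedding space before any downstream fusion or reasoning.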
Problem

Research questions and friction points this paper is trying to address.

audio-visual understanding
multimodal reasoning
omnivideo models
modality fusion
cross-modal perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

query-intensive grounding
modality-attentive fusion
audio-visual reasoning
self-supervised learning
contrastive learning