🤖 AI Summary
Current multimodal large models face bottlenecks in fine-grained audiovisual understanding and cross-modal alignment. To address this, we propose an audio-driven active perception agent that shifts from passive comprehension to proactive, task-oriented cross-modal inquiry, enabling dynamic querying rather than static frame description. Our method introduces a “coarse-to-fine” audio-guided perception framework and a dynamic tool orchestration mechanism that integrates audio event localization, multi-stage attention focusing, and audiovisual collaborative reasoning. This design achieves modality-adaptive alignment and precise spatiotemporal event localization. Evaluated on three major audiovisual understanding benchmarks, our approach achieves state-of-the-art performance, outperforming the best open-source and closed-source models by 10–20% in accuracy. The results demonstrate substantial improvements in fine-grained, joint audiovisual reasoning.
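The “coarse-to-fine” idea can be pictured as a two-stage procedure: audio events are first localized coarsely in time, and only the matching video segments are then examined closely. Below is a minimal sketch of that flow; the function names (`localize_audio_events`, `sample_frames`, `coarse_to_fine`) and the data layout are illustrative assumptions, not the paper's actual interface.

```python
from dataclasses import dataclass

@dataclass
class AudioEvent:
    label: str        # e.g. "dog barking"
    start_s: float    # coarse onset (seconds)
    end_s: float      # coarse offset (seconds)

def localize_audio_events(audio_path: str) -> list[AudioEvent]:
    """Coarse stage: tag and temporally localize events in the full audio track.
    Placeholder output; a real system would call an audio localization model."""
    return [AudioEvent("dog barking", 12.0, 15.5)]

def sample_frames(video_path: str, start_s: float, end_s: float, fps: float = 2.0) -> list[float]:
    """Fine stage: sample frame timestamps only inside the audio-localized window,
    instead of densely captioning every frame of the video."""
    t, frames = start_s, []
    while t <= end_s:
        frames.append(round(t, 2))
        t += 1.0 / fps
    return frames

def coarse_to_fine(video_path: str, audio_path: str, question: str) -> dict:
    """Audio-guided perception: audio events decide where visual attention goes."""
    events = localize_audio_events(audio_path)                          # coarse
    focused = {e.label: sample_frames(video_path, e.start_s, e.end_s)   # fine
               for e in events}
    return {"question": question, "evidence": focused}

print(coarse_to_fine("clip.mp4", "clip.wav", "What made the sound at 0:13?"))
```

The point of the sketch is the ordering: perception effort is spent only where the audio says something happened, rather than uniformly over the whole clip.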
📝 Abstract
Omnimodal large language models have made significant strides in unifying audio and visual modalities; however, they often lack fine-grained cross-modal understanding and struggle with multimodal alignment. To address these limitations, we introduce OmniAgent, a fully audio-guided active perception agent that dynamically orchestrates specialized tools to achieve more fine-grained audio-visual reasoning. Unlike previous works that rely on rigid, static workflows and dense frame captioning, this paper demonstrates a paradigm shift from passive response generation to active multimodal inquiry. OmniAgent employs dynamic planning to autonomously orchestrate tool invocation on demand, strategically concentrating perceptual attention on task-relevant cues. Central to our approach is a novel coarse-to-fine audio-guided perception paradigm, which leverages audio cues to localize temporal events and guide subsequent reasoning. Extensive empirical evaluations on three audio-video understanding benchmarks demonstrate that OmniAgent achieves state-of-the-art performance, surpassing leading open-source and proprietary models by substantial margins of 10–20% in accuracy.
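To make “dynamic planning” concrete, the sketch below shows one way an agent loop might pick tools on demand until it has enough evidence to answer. The tool registry, the `plan_next_step` heuristic, and the stopping rule are all illustrative assumptions; OmniAgent's actual planner and tool set are described in the paper itself.

```python
from typing import Callable

# Hypothetical tool registry: each tool maps the current evidence to enriched evidence.
TOOLS: dict[str, Callable[[dict], dict]] = {
    "audio_event_localizer": lambda ev: {**ev, "audio_events": ["speech @ 3-8s"]},
    "frame_sampler":         lambda ev: {**ev, "frames": ["t=3.0s", "t=5.5s", "t=8.0s"]},
    "av_reasoner":           lambda ev: {**ev, "answer": "The speaker is reading a poem."},
}

def plan_next_step(question: str, evidence: dict) -> str | None:
    """Toy planner: audio first, then vision, then joint reasoning.
    In an agent like the one described above, an LLM would decide which tool (if any) to call next."""
    if "audio_events" not in evidence:
        return "audio_event_localizer"
    if "frames" not in evidence:
        return "frame_sampler"
    if "answer" not in evidence:
        return "av_reasoner"
    return None  # enough evidence gathered

def run_agent(question: str, max_steps: int = 5) -> dict:
    evidence: dict = {"question": question}
    for _ in range(max_steps):
        tool = plan_next_step(question, evidence)
        if tool is None:          # stop as soon as an answer is available
            break
        evidence = TOOLS[tool](evidence)   # invoke the chosen tool on demand
    return evidence

print(run_agent("What is the person in the video saying?"))
```

Tools are invoked only when the planner decides they are needed, which is what distinguishes this kind of on-demand orchestration from a fixed, static workflow.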