Query-Guided Spatial-Temporal-Frequency Interaction for Music Audio-Visual Question Answering

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing audio-visual question answering (AVQA) approaches underutilize audio signals and inadequately leverage textual questions to guide multimodal understanding. This work proposes QSTar, a novel method that deeply integrates question-guided mechanisms into a tripartite interaction across the audio frequency domain, visual spatial domain, and temporal dimension. Inspired by prompt-based learning, QSTar introduces a Query Context Reasoning (QCR) module to precisely attend to semantically relevant features. By combining frequency-domain audio modeling with a tailored multimodal fusion architecture, the proposed approach substantially enhances cross-modal alignment and consistently outperforms current state-of-the-art methods—spanning audio-only, visual-only, and audio-visual QA paradigms—on multiple AVQA benchmarks.
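The summary above describes the question-guided mechanism only at a high level. As a minimal sketch of what a Query Context Reasoning style block could look like, assuming a standard cross-attention formulation in PyTorch (the class name, dimensions, gating, and residual design below are illustrative assumptions, not the authors' released code):

```python
import torch
import torch.nn as nn


class QueryContextReasoning(nn.Module):
    """Question-guided reweighting of audio or visual features (illustrative only)."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, question_tokens: torch.Tensor, modality_feats: torch.Tensor) -> torch.Tensor:
        # question_tokens: [B, Lq, dim]; modality_feats: [B, T, dim] (audio or visual frames).
        # Each modality time step queries the question tokens, so its representation
        # is re-expressed in terms of question-relevant context.
        ctx, _ = self.cross_attn(modality_feats, question_tokens, question_tokens)
        # A sigmoid gate suppresses frames that are weakly related to the question.
        gated = self.gate(ctx) * modality_feats
        return self.norm(gated + modality_feats)
```

In this reading, the question acts as the key/value context and each audio or visual frame is reweighted by its relevance to the query, which is one common way to realize prompt-style guidance before the final answer prediction.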

📝 Abstract
Audio-Visual Question Answering (AVQA) is a challenging multimodal task that requires jointly reasoning over audio, visual, and textual information in a given video to answer natural language questions. Inspired by recent advances in Video QA, many existing AVQA approaches focus primarily on visual information processing, leveraging pre-trained models to extract object-level and motion-level representations. However, in those methods, the audio input is treated largely as complementary to video analysis, and the textual question contributes minimally to audio-visual understanding, as it is typically integrated only in the final stages of reasoning. To address these limitations, we propose a novel Query-guided Spatial-Temporal-Frequency (QSTar) interaction method, which effectively incorporates question-guided clues and exploits the distinctive frequency-domain characteristics of audio signals, alongside spatial and temporal perception, to enhance audio-visual understanding. Furthermore, we introduce a Query Context Reasoning (QCR) block inspired by prompting, which guides the model to focus more precisely on semantically relevant audio and visual features. Extensive experiments conducted on several AVQA benchmarks demonstrate the effectiveness of our proposed method, achieving significant performance improvements over existing Audio QA, Visual QA, Video QA, and AVQA approaches. The code and pretrained models will be released after publication.
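For a concrete picture of how frequency-domain audio features might sit alongside spatial and temporal visual features, the sketch below assumes standard log-mel frames, a pooled 2048-dimensional CNN visual feature per frame, a GRU for temporal context, and cross-attention for fusion. All of these are illustrative choices and do not reproduce the paper's actual architecture.

```python
import torch
import torch.nn as nn


class SpatialTemporalFrequencyFusion(nn.Module):
    """Toy fusion of frequency-domain audio with spatial-temporal visual features."""

    def __init__(self, dim: int = 512, n_mels: int = 64, visual_dim: int = 2048):
        super().__init__()
        self.audio_proj = nn.Linear(n_mels, dim)       # per-frame frequency bins -> shared dim
        self.visual_proj = nn.Linear(visual_dim, dim)  # assumed pooled CNN spatial features
        self.temporal = nn.GRU(dim, dim, batch_first=True)
        self.fuse = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, mel_spec: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # mel_spec: [B, T, n_mels] log-mel frames (frequency domain);
        # visual_feats: [B, T, visual_dim] frame-level spatial features.
        a = self.audio_proj(mel_spec)
        v = self.visual_proj(visual_feats)
        v, _ = self.temporal(v)                        # temporal context over video frames
        # Audio frames attend to the visual stream; a fuller model would also fuse in the
        # other direction and inject the question, e.g. via a QCR-style block.
        av, _ = self.fuse(a, v, v)
        # Pool over time and concatenate for a downstream answer classifier.
        return torch.cat([av.mean(dim=1), v.mean(dim=1)], dim=-1)
```

The point of the sketch is only the interface: frequency-domain audio enters as spectrogram frames rather than a single pooled embedding, so the question-guided modules have per-frame, per-band structure to attend over.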
Problem

Research questions and friction points this paper is trying to address.

Audio-Visual Question Answering
Multimodal Reasoning
Query-Guided Understanding
Frequency Domain
Spatial-Temporal Perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Query-guided interaction
Spatial-Temporal-Frequency modeling
Audio-Visual Question Answering
Prompt-inspired reasoning
Multimodal fusion