🤖 AI Summary
Existing audio-visual question answering (AVQA) methods for music overlook instrument-specific characteristics, rhythmic structure, and strong temporal coupling, limiting performance on fine-grained question answering. To address this, we propose a music-aware multimodal interaction backbone that jointly models rhythm-aware audio representations, instrument-aware visual disentanglement, cross-modal temporal alignment, and music source separation as auxiliary supervision. We construct and publicly release Music AVQA+, the first AVQA dataset with fine-grained rhythmic annotations and sound-source localization labels. Furthermore, we introduce a time-alignment mechanism that enables precise temporal modeling of musical semantics. On the Music AVQA benchmark, our approach achieves state-of-the-art performance, with significant gains in rhythm recognition (+8.2%) and instrument localization (+11.7%) accuracy. Both the codebase and the Music AVQA+ dataset are open-sourced.
📝 Abstract
Music performances are representative scenarios for audio-visual modeling. Unlike everyday scenes, where audio is often sparse, music performances involve dense, continuous audio signals throughout. While existing multimodal learning methods for audio-visual question answering demonstrate impressive capabilities in general scenarios, they struggle with fundamental problems specific to music performances: they underexplore the interactions between the multimodal signals of a performance and fail to account for the distinctive characteristics of instruments and music. As a result, existing methods tend to answer questions about musical performances inaccurately. To bridge these research gaps, (i) given the intricate multimodal interconnectivity inherent to music data, we design our primary backbone to model multimodal interactions in the context of music; (ii) to enable the model to learn musical characteristics, we annotate and release rhythm and music-source labels for existing music datasets; (iii) for time-aware audio-visual modeling, we align the model's music predictions along the temporal dimension. Our experiments achieve state-of-the-art results on the Music AVQA datasets. Our code is available at https://github.com/xid32/Amuse.
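To make the cross-modal temporal alignment idea concrete, here is a minimal sketch: audio and visual feature streams sampled at different rates are resampled onto a shared temporal grid, then fused per time step. The function names, the nearest-neighbor resampling strategy, and concatenation-based fusion are all illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of cross-modal temporal alignment (assumptions, not the
# paper's method): resample two feature streams onto one grid, then fuse.

def align_to_grid(features, src_fps, grid_fps, duration):
    """Resample a per-frame feature stream onto a shared temporal grid
    using nearest-neighbor lookup in time."""
    n_steps = int(duration * grid_fps)
    aligned = []
    for t in range(n_steps):
        seconds = t / grid_fps
        src_idx = min(int(round(seconds * src_fps)), len(features) - 1)
        aligned.append(features[src_idx])
    return aligned

def fuse_streams(audio_feats, visual_feats, audio_fps, visual_fps,
                 grid_fps, duration):
    """Align both modalities to one grid, then concatenate per time step."""
    a = align_to_grid(audio_feats, audio_fps, grid_fps, duration)
    v = align_to_grid(visual_feats, visual_fps, grid_fps, duration)
    return [fa + fv for fa, fv in zip(a, v)]

# Toy example: a 2-second clip with audio features at 4 fps,
# video features at 2 fps, fused on a 2 fps grid.
audio = [[i] for i in range(8)]        # 8 audio frames
video = [[10 + i] for i in range(4)]   # 4 video frames
fused = fuse_streams(audio, video, audio_fps=4, visual_fps=2,
                     grid_fps=2, duration=2.0)
print(fused)  # 4 fused steps, each [audio_feat, visual_feat]
```

In a real model the per-step fused features would feed a cross-modal attention or QA head; the point of the sketch is only that both modalities must index the same musical moment before fusion.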