Do Audio-Visual Large Language Models Really See and Hear?

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates modality bias in audio-visual large language models (AVLLMs), with a particular focus on the suppression of audio semantics under audio-visual conflict. Employing interpretability techniques (feature probing, cross-modal representation analysis, multi-layer feature tracing, and ablation studies), it presents the first systematic examination of how audio and visual features evolve and fuse within AVLLMs. The findings reveal that although audio semantics are present in intermediate layers, they are significantly suppressed by the visual modality during deep fusion. This visual-dominant bias stems from weakly aligned audio supervision signals during training and is inherited from the underlying vision-language foundation model. The work offers critical insights for improving the fairness and robustness of multimodal large language models.
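The page does not include the paper's actual probing code, so the following is only a minimal sketch of the general layer-wise feature-probing technique the summary refers to: fit a linear classifier on frozen intermediate activations and check whether audio semantics are linearly decodable at each depth. All names (`hidden_states`, `labels`, `num_layers`, `num_classes`) are hypothetical placeholders, and a real experiment would evaluate on a held-out split rather than the training features.

```python
import torch
import torch.nn as nn

# Hypothetical setup: hidden_states[l] holds the layer-l activations of an
# AVLLM for a batch of clips (shape [N, D]), and labels encodes the audio
# event class for each clip. One linear probe per layer tests whether audio
# semantics are linearly decodable at that depth.
def train_linear_probe(features: torch.Tensor, labels: torch.Tensor,
                       num_classes: int, epochs: int = 50) -> float:
    """Fit a linear classifier on frozen features; return its accuracy."""
    probe = nn.Linear(features.shape[-1], num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(features), labels)
        loss.backward()
        opt.step()
    with torch.no_grad():
        acc = (probe(features).argmax(-1) == labels).float().mean().item()
    return acc

# Probing every layer would yield an accuracy-vs-depth curve; high mid-layer
# accuracy that fails to surface in generation would mirror the paper's
# "present but suppressed" finding.
# accs = [train_linear_probe(hidden_states[l], labels, num_classes)
#         for l in range(num_layers)]
```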
📝 Abstract
Audio-Visual Large Language Models (AVLLMs) are emerging as unified interfaces to multimodal perception. We present the first mechanistic interpretability study of AVLLMs, analyzing how audio and visual features evolve and fuse through different layers of an AVLLM to produce the final text outputs. We find that although AVLLMs encode rich audio semantics at intermediate layers, these capabilities largely fail to surface in the final text generation when audio conflicts with vision. Probing analyses show that useful latent audio information is present, but deeper fusion layers disproportionately privilege visual representations that tend to suppress audio cues. We further trace this imbalance to training: the AVLLM's audio behavior strongly matches its vision-language base model, indicating limited additional alignment to audio supervision. Our findings reveal a fundamental modality bias in AVLLMs and provide new mechanistic insights into how multimodal LLMs integrate audio and vision.
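The abstract's ablation-style evidence (removing one modality's contribution and watching how the output shifts) can likewise be sketched. This is a hedged illustration, not the paper's method: it assumes a HuggingFace-style model whose forward pass accepts `inputs_embeds`, and the token-span indices marking the audio and visual segments of the fused sequence are placeholders.

```python
import torch

# Hypothetical ablation: zero out one modality's token span in the fused
# input-embedding sequence and compare next-token logits against the
# unablated run, to measure which modality the generation depends on.
@torch.no_grad()
def modality_ablation(model, inputs_embeds, audio_span, visual_span):
    base = model(inputs_embeds=inputs_embeds).logits[:, -1]

    no_audio = inputs_embeds.clone()
    no_audio[:, audio_span[0]:audio_span[1]] = 0.0
    drop_audio = (base - model(inputs_embeds=no_audio).logits[:, -1]).norm()

    no_vision = inputs_embeds.clone()
    no_vision[:, visual_span[0]:visual_span[1]] = 0.0
    drop_vision = (base - model(inputs_embeds=no_vision).logits[:, -1]).norm()

    # Visual dominance of the kind the abstract describes would show up as
    # drop_vision >> drop_audio even when the question targets the audio track.
    return drop_audio.item(), drop_vision.item()
```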
Problem

Research questions and friction points this paper is trying to address.

Audio-Visual Large Language Models
modality bias
audio-visual fusion
multimodal perception
mechanistic interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Audio-Visual Large Language Models
mechanistic interpretability
modality bias
multimodal fusion
audio-visual alignment