When Scaling Fails: Mitigating Audio Perception Decay of LALMs via Multi-Step Perception-Aware Reasoning

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the degradation of audio perception in large audio language models (LALMs) during multi-step reasoning, which undermines performance on complex tasks. The authors first introduce CAFE, an evaluation framework that identifies and quantifies this perceptual decay, and use attention analysis to investigate its dynamics. To mitigate the decay, they propose MPAR², a paradigm that decomposes complex queries into perception-rich sub-problems through multi-step perception-aware reasoning and dynamically adapts its reasoning strategy via reinforcement learning to match task complexity. Experiments show that MPAR² substantially mitigates perceptual decay, raising perception accuracy on CAFE from 31.74% to 63.51% and reaching 74.59% accuracy on the MMAU benchmark, thereby overcoming the performance limitations of conventional test-time scaling methods.

📝 Abstract
Test-Time Scaling has shown notable efficacy in addressing complex problems through scaling inference compute. However, within Large Audio-Language Models (LALMs), an unintuitive phenomenon exists: post-training models for structured reasoning trajectories results in marginal or even negative gains compared to post-training for direct answering. To investigate it, we introduce CAFE, an evaluation framework designed to precisely quantify audio reasoning errors. Evaluation results reveal LALMs struggle with perception during reasoning and encounter a critical bottleneck: reasoning performance suffers from audio perception decay as reasoning length extends. To address it, we propose MPAR$^2$, a paradigm that encourages dynamic perceptual reasoning and decomposes complex questions into perception-rich sub-problems. Leveraging reinforcement learning, MPAR$^2$ improves perception performance on CAFE from 31.74% to 63.51% and effectively mitigates perception decay, concurrently enhancing reasoning capabilities to achieve a significant 74.59% accuracy on the MMAU benchmark. Further analysis demonstrates that MPAR$^2$ reinforces LALMs to attend to audio input and dynamically adapts reasoning budget to match task complexity.
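The decomposition the abstract describes — breaking a complex audio question into perception-rich sub-problems answered step by step — can be illustrated with a toy sketch. Every function and sub-question below is hypothetical, invented for illustration only; the paper trains this behaviour into the LALM itself via reinforcement learning rather than hand-coding a pipeline.

```python
# Toy sketch of multi-step perception-aware reasoning: decompose a
# compound audio query into perception-focused sub-questions, resolve
# each against the audio, and answer the original question last.
# All names here are illustrative assumptions, not the paper's API.

def decompose(question):
    # Hypothetical decomposition into perception-rich sub-problems.
    return [
        "What sound events occur in the clip?",
        "In what order do the events occur?",
        question,  # original question, answered last with full context
    ]

def perceive(audio, sub_question, context):
    # Placeholder for one perception step of an LALM; a real system
    # would attend to the audio here. We return a tagged string so the
    # control flow is runnable end to end.
    return f"answer({sub_question})"

def multi_step_answer(audio, question):
    context = []
    for sub_q in decompose(question):
        context.append(perceive(audio, sub_q, context))
    return context[-1]  # the final step addresses the original question

answer = multi_step_answer("clip.wav",
                           "Why does the dog bark after the doorbell?")
```

The point of the sketch is structural: each step re-queries the audio, so perception is refreshed throughout the reasoning chain instead of decaying as a single long trajectory extends.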
Problem

Research questions and friction points this paper is trying to address.

audio perception decay
Large Audio-Language Models
multi-step reasoning
perception bottleneck
reasoning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Perception-Aware Reasoning
Audio Perception Decay
Test-Time Scaling
Reinforcement Learning
Large Audio-Language Models