🤖 AI Summary
This work addresses a limitation of building audio language models on reasoning LLMs: the common recipe of training an adapter on self-generated textual targets produces unnatural responses, because the reasoning model's chain-of-thought exposes the textual surrogate input and hinders effective audio-language alignment. To overcome this, we propose a self-rephrasing mechanism that converts the model's self-generated responses into an audio-understanding format compatible with reasoning architectures, and we fuse and compress multiple audio encoders to enhance representational capacity. Leveraging a large-scale multi-task audio-language corpus of 6 million samples, we efficiently train a 4B-parameter model. Our approach achieves state-of-the-art open-source performance on the MMAU-speech and MMSU benchmarks, demonstrating strong audio reasoning while preserving text capabilities at low training cost, and outperforming not only models of comparable size but also most larger counterparts.
📝 Abstract
Large audio language models (ALMs) extend LLMs with auditory understanding. A common approach freezes the LLM and trains only an adapter on self-generated targets. However, this fails for reasoning LLMs (RLMs), whose built-in chain-of-thought traces expose the textual surrogate input, yielding unnatural responses. We propose self-rephrasing, which converts self-generated responses into audio-understanding variants compatible with RLMs while preserving distributional alignment. We further fuse and compress multiple audio encoders for stronger representations. For training, we construct a 6M-instance multi-task corpus (2.5M unique prompts) spanning 19K hours of speech, music, and sound. Our 4B-parameter ALM outperforms similarly sized models and surpasses most larger ALMs on audio-reasoning benchmarks, while preserving textual capabilities at low training cost. Notably, it achieves the best open-source results on the MMAU-speech and MMSU benchmarks and ranks third among all models.