🤖 AI Summary
Large audio-language models (LALMs) face a trade-off between gaining auditory perception and catastrophically forgetting their original linguistic capabilities. Method: The paper proposes DeSTA, a self-generated cross-modal alignment framework in which the backbone large language model synthesizes its own audio-text alignment targets, with no task-specific instruction tuning. Leveraging 7,000 hours of diverse, multi-source audio, the authors construct DeSTA-AQA5M, a 5-million-sample dataset that supports robust zero-shot audio understanding and language generation. Contribution/Results: DeSTA2.5-Audio achieves state-of-the-art or competitive performance on major benchmarks, including Dynamic-SUPERB, MMAU, and SAKURA, and outperforms widely adopted supervised data construction and training paradigms. The authors report this as the first work to empirically validate both the effectiveness and scalability of LLM-generated alignment data for audio-language modeling.
📝 Abstract
We introduce DeSTA2.5-Audio, a general-purpose Large Audio Language Model (LALM) designed for robust auditory perception and instruction-following, without requiring task-specific audio instruction tuning. Recent LALMs typically augment Large Language Models (LLMs) with auditory capabilities by training on large-scale, manually curated or LLM-synthesized audio-instruction datasets. However, these approaches often suffer from catastrophic forgetting of the LLM's original language abilities. To address this, we revisit the data construction pipeline and propose DeSTA, a self-generated cross-modal alignment strategy in which the backbone LLM generates its own training targets. This approach preserves the LLM's native language proficiency while establishing effective audio-text alignment, thereby enabling zero-shot generalization without task-specific tuning. Using DeSTA, we construct DeSTA-AQA5M, a large-scale, task-agnostic dataset containing 5 million training samples derived from 7,000 hours of audio spanning 50 diverse datasets, including speech, environmental sounds, and music. DeSTA2.5-Audio achieves state-of-the-art or competitive performance across a wide range of audio-language benchmarks, including Dynamic-SUPERB, MMAU, SAKURA, Speech-IFEval, and VoiceBench. Comprehensive comparative studies demonstrate that our self-generated strategy outperforms widely adopted data construction and training strategies in both auditory perception and instruction-following capabilities. Our findings underscore the importance of carefully designed data construction in LALM development and offer practical insights for building robust, general-purpose LALMs.
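To make the self-generation idea concrete, below is a minimal sketch of what such a data-construction step could look like. This is an illustration under stated assumptions, not the authors' released pipeline: the backbone model ID, the metadata fields, and the helper names (`describe_audio`, `self_generate_target`) are hypothetical, and any chat-capable LLM could stand in for the backbone. The key point it demonstrates is that the response paired with each audio clip is produced by the backbone LLM itself, so later alignment training pushes the LALM toward outputs in the backbone's native distribution rather than toward externally curated labels.

```python
# Sketch of DeSTA-style self-generated target construction (assumptions noted above).
# Each audio clip is represented to the backbone LLM by a textual surrogate built
# from its metadata (transcript, tags, etc.); the LLM's own answer to an
# instruction about that surrogate becomes the training target.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # assumed backbone; swap in yours

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def describe_audio(meta: dict) -> str:
    """Serialize audio metadata into a textual stand-in for the clip."""
    parts = []
    if meta.get("transcript"):
        parts.append(f'Transcript: "{meta["transcript"]}"')
    for key in ("gender", "emotion", "audio_events"):
        if key in meta:
            parts.append(f"{key.replace('_', ' ').capitalize()}: {meta[key]}")
    return "\n".join(parts)

def self_generate_target(meta: dict, instruction: str, max_new_tokens: int = 256) -> str:
    """Have the backbone LLM answer `instruction` about the textualized audio.
    Its response is kept as the training target for the (audio, instruction) pair."""
    messages = [
        {"role": "user", "content": f"[Audio]\n{describe_audio(meta)}\n\n{instruction}"},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens, do_sample=False)
    # Strip the prompt tokens so only the generated response remains.
    return tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True)

# One alignment sample: the real waveform, an instruction, and the self-generated target.
sample = {
    "audio": "clip_0001.wav",  # hypothetical path
    "instruction": "What emotion does the speaker convey, and why?",
}
meta = {"transcript": "I can't believe we finally made it!", "emotion": "excited"}
sample["target"] = self_generate_target(meta, sample["instruction"])
```

In this framing, dataset scale comes cheaply: any corpus with usable metadata can be converted into (audio, instruction, response) triples without human annotation, which is consistent with the paper's construction of 5 million samples from 50 heterogeneous source datasets.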