🤖 AI Summary
Current audio-language models predominantly follow the autoregressive paradigm and struggle to balance generation quality with inference efficiency, while diffusion models remain largely unexplored for speech understanding. This paper introduces DIFFA, the first diffusion-based large audio-language model tailored for spoken language understanding. Methodologically, DIFFA (1) freezes a pretrained diffusion language model and employs lightweight dual adapters for audio–text cross-modal alignment; (2) adopts a two-stage training paradigm that combines ASR alignment supervision with synthetic audio-caption instruction data generated by prompting large language models; and (3) inherits the bidirectional context modeling and controllable generation of the diffusion paradigm. Trained on only 960 hours of real ASR data and 127 hours of synthetic instruction data, DIFFA outperforms several leading open-source autoregressive models on the MMSU, MMAU, and VoiceBench benchmarks. These results demonstrate the efficacy and scalability of diffusion mechanisms for efficient, high-performance spoken language understanding.
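
The core design described above, a frozen diffusion language model bridged to a speech encoder through lightweight trainable adapters, can be sketched as follows. This is a minimal illustration under stated assumptions, not the released implementation: the module names, adapter sizes, the roles of the two adapters, and how their outputs are combined are all assumptions.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Lightweight projection from audio-encoder features to the LM embedding space."""
    def __init__(self, audio_dim: int, lm_dim: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(audio_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, lm_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class DIFFASketch(nn.Module):
    """Frozen diffusion LM + dual trainable adapters (illustrative, not the paper's exact code)."""
    def __init__(self, audio_encoder: nn.Module, diffusion_lm: nn.Module,
                 audio_dim: int, lm_dim: int):
        super().__init__()
        self.audio_encoder = audio_encoder      # pretrained speech encoder (assumed given)
        self.diffusion_lm = diffusion_lm        # pretrained diffusion language model
        for p in self.diffusion_lm.parameters():
            p.requires_grad = False             # the LM stays frozen; only the adapters train
        # "Dual adapters": two parallel projections; combining them by summation
        # is an assumption made for this sketch.
        self.adapter_a = Adapter(audio_dim, lm_dim)
        self.adapter_b = Adapter(audio_dim, lm_dim)

    def forward(self, waveform: torch.Tensor, text_tokens: torch.Tensor):
        feats = self.audio_encoder(waveform)                      # (B, T, audio_dim)
        audio_prefix = self.adapter_a(feats) + self.adapter_b(feats)
        # Hypothetical interface: the diffusion LM denoises masked text tokens
        # conditioned on the projected audio prefix.
        return self.diffusion_lm(inputs_embeds=audio_prefix, labels=text_tokens)
```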
📝 Abstract
Recent advances in large language models (LLMs) have shown remarkable capabilities across textual and multimodal domains. In parallel, diffusion-based language models have emerged as a promising alternative to the autoregressive paradigm, offering improved controllability, bidirectional context modeling, and robust generation. However, their application to the audio modality remains underexplored. In this work, we introduce DIFFA, the first diffusion-based Large Audio-Language Model designed to perform spoken language understanding. DIFFA integrates a frozen diffusion language model with a lightweight dual-adapter architecture that bridges speech understanding and natural language reasoning. We employ a two-stage training pipeline: first, aligning semantic representations via an ASR objective; then, learning instruction-following abilities through synthetic audio-caption pairs automatically generated by prompting LLMs. Despite being trained on only 960 hours of ASR data and 127 hours of synthetic instruction data, DIFFA demonstrates competitive performance on major benchmarks, including MMSU, MMAU, and VoiceBench, outperforming several open-source autoregressive baselines. Our results reveal the potential of diffusion-based language models for efficient and scalable audio understanding, opening a new direction for speech-driven AI. Our code will be available at https://github.com/NKU-HLT/DIFFA.git.
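
The two-stage pipeline described in the abstract (ASR alignment, then instruction tuning on synthetic audio-caption data) could look roughly like the sketch below. The loaders, learning rate, epoch counts, and the assumption that the model exposes an LM-style `.loss` output are illustrative assumptions, not details taken from the paper.

```python
import torch

def train_two_stage(model, asr_loader, instruction_loader,
                    lr: float = 1e-4, epochs: tuple[int, int] = (1, 1)):
    """Stage 1: align audio and text with an ASR objective (~960 h of real speech).
    Stage 2: instruction-tune on LLM-generated audio-caption pairs (~127 h).
    Only parameters with requires_grad=True (the adapters) are updated,
    since the diffusion LM is frozen."""
    optimizer = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=lr
    )
    for loader, n_epochs in zip((asr_loader, instruction_loader), epochs):
        model.train()
        for _ in range(n_epochs):
            for batch in loader:
                out = model(batch["audio"], batch["text"])
                loss = out.loss if hasattr(out, "loss") else out  # assumed output interface
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```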