MiDashengLM: Efficient Audio Understanding with General Audio Captions

📅 2025-08-05
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing large audio-language models (LALMs) predominantly rely on closed datasets or proprietary components, limiting their generalizability and accessibility. To address this, we propose an open-training paradigm grounded in universal audio captioning, enabling unified modeling of speech, environmental sounds, and music for cross-modal textual understanding of audio. Methodologically, our approach exclusively leverages open-source resources: the Dasheng audio encoder, the ACAVCaps dataset, and publicly available pretraining and supervised fine-tuning pipelines. The resulting model is fully end-to-end reproducible and achieves substantial inference efficiency gains, reducing first-token latency by 4x and increasing throughput by 20x. Our core contribution is the first high-performance, fully open-source LALM that supports unified representation across diverse audio sources, thereby advancing transparent, efficient, and reproducible audio-language research.


๐Ÿ“ Abstract
Current approaches for large audio language models (LALMs) often rely on closed data sources or proprietary models, limiting their generalization and accessibility. This paper introduces MiDashengLM, a novel open audio-language model designed for efficient and comprehensive audio understanding through general audio captions drawn from our novel ACAVCaps training dataset. MiDashengLM relies exclusively on publicly available pretraining and supervised fine-tuning (SFT) datasets, ensuring full transparency and reproducibility. At its core, MiDashengLM integrates Dasheng, an open-source audio encoder specifically engineered to process diverse auditory information effectively. Unlike previous works primarily focused on Automatic Speech Recognition (ASR) based audio-text alignment, our strategy centers on general audio captions, fusing speech, sound, and music information into a single caption and thereby enabling a holistic textual representation of complex audio scenes. Lastly, MiDashengLM provides up to a 4x speedup in time-to-first-token (TTFT) and up to 20x higher throughput than comparable models. Checkpoints are available online at https://huggingface.co/mispeech/midashenglm-7b and https://github.com/xiaomi-research/dasheng-lm.
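The two efficiency metrics cited in the abstract, time-to-first-token (TTFT) and throughput, can be measured with a small timing harness over any streaming token iterator. The sketch below is illustrative only (the function name and generator interface are our assumptions, not part of the paper's code):

```python
import time


def measure_ttft_and_throughput(token_stream):
    """Measure TTFT and decoding throughput for an iterable of tokens.

    TTFT is the delay from the start of generation until the first token
    arrives; throughput is tokens produced per second overall.
    """
    start = time.perf_counter()
    ttft = None
    n_tokens = 0
    for _ in token_stream:
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start  # latency before the first token appears
        n_tokens += 1
    total = time.perf_counter() - start
    throughput = n_tokens / total if total > 0 else 0.0  # tokens/second
    return ttft, throughput
```

In practice the iterator would be a streaming decode of the model (e.g. via a token streamer), so a "4x TTFT speedup" means the first returned value is a quarter of the baseline's, and "20x throughput" means the second is twenty times larger.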
Problem

Research questions and friction points this paper is trying to address.

Overcoming reliance on closed data sources in audio-language models
Enhancing audio understanding with general captions and open datasets
Improving speed and throughput in audio-text processing efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open audio-language model with general audio captions
Public pretraining and supervised fine-tuning datasets
Integrates Dasheng for diverse auditory processing