Abstract
Current approaches for large audio-language models (LALMs) often rely on closed data sources or proprietary models, limiting their generalization and accessibility. This paper introduces MiDashengLM, a novel open audio-language model designed for efficient and comprehensive audio understanding through general audio captions, trained on our novel ACAVCaps dataset. MiDashengLM relies exclusively on publicly available pretraining and supervised fine-tuning (SFT) datasets, ensuring full transparency and reproducibility. At its core, MiDashengLM integrates Dasheng, an open-source audio encoder specifically engineered to process diverse auditory information effectively. Unlike previous works that primarily align audio and text via Automatic Speech Recognition (ASR), our strategy centers on general audio captions, fusing speech, sound, and music information into a single textual representation and thereby enabling a holistic description of complex audio scenes. Finally, MiDashengLM delivers up to a 4x speedup in time-to-first-token (TTFT) and up to 20x higher throughput than comparable models. Checkpoints are available online at https://huggingface.co/mispeech/midashenglm-7b and https://github.com/xiaomi-research/dasheng-lm.
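The TTFT and throughput figures above refer to two standard serving metrics: the delay from the end of the prompt until the first generated token, and the number of generated tokens per second. A minimal sketch of how one might compute them from per-token completion timestamps (the helper name and timestamp convention are illustrative, not from the paper):

```python
def decode_metrics(prompt_end: float, token_times: list[float]) -> tuple[float, float]:
    """Compute (TTFT, throughput) from decoding timestamps.

    prompt_end: wall-clock time (seconds) when the prompt finished processing.
    token_times: wall-clock time at which each generated token was emitted,
                 in order; must be non-empty and all later than prompt_end.
    """
    # Time-to-first-token: gap between prompt completion and the first token.
    ttft = token_times[0] - prompt_end
    # Throughput: tokens generated per second of total decoding time.
    total_decode_time = token_times[-1] - prompt_end
    throughput = len(token_times) / total_decode_time
    return ttft, throughput


# Example with synthetic timestamps: prompt done at t=0.0 s,
# four tokens emitted at 0.5 s intervals.
ttft, tps = decode_metrics(0.0, [0.5, 1.0, 1.5, 2.0])
# ttft == 0.5 s, throughput == 4 tokens / 2.0 s == 2.0 tokens/s
```

In practice the timestamps would come from a monotonic clock (e.g. `time.perf_counter()`) wrapped around the model's token stream; speedup ratios like the 4x TTFT claim are then the ratio of these metrics between two models on identical inputs.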