Eureka-Audio: Triggering Audio Intelligence in Compact Language Models

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving efficient multitask audio understanding under stringent parameter constraints by proposing an end-to-end compact audio language model with only 1.7 billion parameters. The architecture integrates a lightweight language backbone, a Whisper-based audio encoder, and a sparsely activated mixture-of-experts (MoE) adapter to effectively mitigate cross-modal optimization conflicts and handle audio heterogeneity. Furthermore, the study introduces DataFlux, a novel closed-loop pipeline for synthesizing and validating instruction-tuned data, which substantially enhances paralinguistic reasoning capabilities. Despite its compact size, the model matches or surpasses the performance of much larger models ranging from 7B to 30B parameters across diverse tasks—including automatic speech recognition, audio semantic understanding, and dense audio captioning—demonstrating an exceptional balance between performance and computational efficiency.

📝 Abstract
We present Eureka-Audio, a compact yet high-performance audio language model that achieves competitive performance against models 4 to 18 times larger across a broad range of audio understanding benchmarks. Despite containing only 1.7B parameters, Eureka-Audio demonstrates strong performance on automatic speech recognition (ASR), audio understanding, and dense audio captioning, matching or surpassing multiple 7B to 30B audio and omni-modal baselines. The model adopts a unified end-to-end architecture composed of a lightweight language backbone, a Whisper-based audio encoder, and a sparsely activated Mixture-of-Experts (MoE) adapter that explicitly accounts for audio heterogeneity and alleviates cross-modal optimization conflicts under limited capacity. To further enhance paralinguistic reasoning, we introduce DataFlux, a closed-loop audio instruction data synthesis and verification pipeline that constructs high-quality, logically consistent supervision from raw audio. Extensive evaluations across ASR, knowledge reasoning, safety, instruction following, and paralinguistic benchmarks demonstrate that Eureka-Audio achieves an efficient balance between computational cost and performance. These results establish Eureka-Audio as a strong and practical baseline for lightweight audio understanding models.
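The sparsely activated MoE adapter described above can be pictured as a router that sends each audio-frame embedding to a small subset of expert projections. The following is a minimal sketch only: the paper's actual expert count, gating scheme, and dimensions are not given here, so top-2 token-level routing over per-expert linear projections is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

class MoEAdapter:
    """Hypothetical sketch of a sparsely activated MoE adapter.

    Each audio frame embedding (from the audio encoder) is routed to its
    top-k experts; expert outputs are mixed by renormalized gate weights
    and projected into the language model's embedding space.
    """

    def __init__(self, d_audio, d_model, n_experts=4, top_k=2):
        self.top_k = top_k
        # One linear projection per expert: audio-encoder dim -> LM dim.
        self.experts = [rng.standard_normal((d_audio, d_model)) * 0.02
                        for _ in range(n_experts)]
        self.gate = rng.standard_normal((d_audio, n_experts)) * 0.02

    def __call__(self, x):
        # x: (seq_len, d_audio) frame embeddings.
        logits = x @ self.gate                               # (seq, n_experts)
        top = np.argsort(logits, axis=-1)[:, -self.top_k:]   # top-k expert ids
        out = np.zeros((x.shape[0], self.experts[0].shape[1]))
        for t in range(x.shape[0]):
            sel = top[t]
            w = np.exp(logits[t, sel])
            w /= w.sum()  # softmax over the selected experts only
            for weight, e in zip(w, sel):
                out[t] += weight * (x[t] @ self.experts[e])
        return out

adapter = MoEAdapter(d_audio=8, d_model=16)
y = adapter(rng.standard_normal((5, 8)))
print(y.shape)  # (5, 16)
```

Because only k of the experts run per frame, parameter count can grow with the number of experts while per-token compute stays roughly constant, which is the capacity/efficiency trade-off the abstract attributes to the adapter.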
Problem

Research questions and friction points this paper is trying to address.

compact language models
audio intelligence
audio understanding
model efficiency
paralinguistic reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts (MoE)
end-to-end audio language model
DataFlux
compact audio understanding
paralinguistic reasoning
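DataFlux is described as a closed-loop synthesis-and-verification pipeline. As an illustration of the control flow only, the sketch below assumes a synthesize stage that drafts an instruction sample from raw audio and a verify stage that checks logical consistency, with failed samples re-synthesized; `synthesize` and `verify` are placeholder callables, not the paper's actual components.

```python
def dataflux_round(raw_audio_clips, synthesize, verify, max_retries=2):
    """Hypothetical closed-loop data pipeline sketch: draft an instruction
    sample per clip, keep it only if verification passes, and retry
    synthesis on failure (the feedback loop)."""
    accepted, rejected = [], []
    for clip in raw_audio_clips:
        sample = synthesize(clip)
        for _ in range(max_retries):
            if verify(sample):
                accepted.append(sample)
                break
            sample = synthesize(clip)  # loop back: re-synthesize on failure
        else:
            rejected.append(clip)      # exhausted retries without passing
    return accepted, rejected

# Toy demo with placeholder stages (in practice both would be model-backed).
clips = ["clip_a", "clip_b"]
accepted, rejected = dataflux_round(
    clips,
    synthesize=lambda clip: {"audio": clip, "instruction": f"Describe {clip}"},
    verify=lambda sample: sample["instruction"].startswith("Describe"),
)
print(len(accepted), len(rejected))  # 2 0
```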
Dan Zhang
Baidu Inc.
Yishu Lei
Baidu Inc.
Jing Hu
Associate professor, School of Computer Science and Engineering, Xi'an University of Technology
hyperspectral image processing
Shuwei He
Baidu Inc.; College of Computer Science, Inner Mongolia University
Songhe Deng
Baidu Inc.
Xianlong Luo
Baidu Inc.
Danxiang Zhu
Baidu Inc.
Shikun Feng
Baidu
NLP
Rui Liu
College of Computer Science, Inner Mongolia University
Jingzhou He
Baidu Inc.
Yu Sun
Baidu
Natural Language Processing, Deep Learning
Hua Wu
Baidu Inc.
Haifeng Wang
Baidu
NLP, MT, Search, Speech, Data Mining