🤖 AI Summary
This work addresses the challenge of efficient multitask audio understanding under stringent parameter constraints by proposing a compact, end-to-end audio language model with only 1.7 billion parameters. The architecture integrates a lightweight language backbone, a Whisper-based audio encoder, and a sparsely activated mixture-of-experts (MoE) adapter that mitigates cross-modal optimization conflicts and accommodates audio heterogeneity. The study also introduces DataFlux, a closed-loop pipeline for synthesizing and verifying audio instruction-tuning data, which substantially enhances paralinguistic reasoning. Despite its compact size, the model matches or surpasses much larger models in the 7B to 30B range on diverse tasks, including automatic speech recognition, audio semantic understanding, and dense audio captioning, demonstrating a strong balance between performance and computational efficiency.
📝 Abstract
We present Eureka-Audio, a compact yet high-performance audio language model that achieves competitive performance against models 4 to 18 times larger across a broad range of audio understanding benchmarks. Despite containing only 1.7B parameters, Eureka-Audio demonstrates strong performance on automatic speech recognition (ASR), audio understanding, and dense audio captioning, matching or surpassing multiple 7B to 30B audio and omni-modal baselines. The model adopts a unified end-to-end architecture composed of a lightweight language backbone, a Whisper-based audio encoder, and a sparsely activated Mixture-of-Experts (MoE) adapter that explicitly accounts for audio heterogeneity and alleviates cross-modal optimization conflicts under limited capacity. To further enhance paralinguistic reasoning, we introduce DataFlux, a closed-loop audio instruction data synthesis and verification pipeline that constructs high-quality, logically consistent supervision from raw audio. Extensive evaluations across ASR, knowledge reasoning, safety, instruction following, and paralinguistic benchmarks demonstrate that Eureka-Audio achieves an efficient balance between computational cost and performance. These results establish Eureka-Audio as a strong and practical baseline for lightweight audio understanding models.
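
The sparsely activated MoE adapter can be pictured as a router that sends each frame of Whisper-encoder output to a small number of expert projections before the resulting audio tokens reach the language backbone. The sketch below is illustrative only, not the paper's implementation: the dimensions (`audio_dim=1280`, `llm_dim=2048`), the number of experts, the top-k routing scheme, and the `MoEAdapter` / `router` names are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEAdapter(nn.Module):
    """Sparsely activated MoE adapter bridging a Whisper-style audio encoder
    and a small language-model backbone (hypothetical sketch, not the paper's code)."""

    def __init__(self, audio_dim=1280, llm_dim=2048, num_experts=4, top_k=2, hidden_dim=4096):
        super().__init__()
        # Router scores each audio frame; only the top-k experts are evaluated per frame.
        self.router = nn.Linear(audio_dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(audio_dim, hidden_dim),
                nn.GELU(),
                nn.Linear(hidden_dim, llm_dim),
            )
            for _ in range(num_experts)
        )
        self.top_k = top_k
        self.llm_dim = llm_dim

    def forward(self, audio_feats):  # audio_feats: (batch, frames, audio_dim)
        b, t, d = audio_feats.shape
        flat = audio_feats.reshape(b * t, d)

        # Sparse routing: keep the top-k experts per frame and renormalize their gate weights.
        gate_logits = self.router(flat)                       # (b*t, num_experts)
        weights, idx = gate_logits.topk(self.top_k, dim=-1)   # (b*t, top_k)
        weights = F.softmax(weights, dim=-1)

        out = flat.new_zeros(b * t, self.llm_dim)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(flat[mask])

        # Audio tokens projected into the LLM embedding space: (batch, frames, llm_dim).
        return out.reshape(b, t, -1)


if __name__ == "__main__":
    adapter = MoEAdapter()
    feats = torch.randn(2, 100, 1280)   # dummy Whisper-encoder output
    tokens = adapter(feats)
    print(tokens.shape)                 # torch.Size([2, 100, 2048])
```

In this reading, routing different kinds of audio (speech, environmental sound, paralinguistic cues) to different experts is what lets a small adapter absorb audio heterogeneity without forcing one dense projection to serve every task.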