Efficient Scaling for LLM-based ASR

📅 2025-08-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high computational cost and diminishing performance gains of large language model–based automatic speech recognition (LLM-ASR), this paper proposes EFIN (Encoder First Integration), a multi-stage training strategy: the speech encoder is first pretrained independently, and only then integrated with a frozen or lightly fine-tuned LLM. Through systematic, controlled experiments, the authors show that this strategy scales significantly more efficiently than the standard practice of joint post-training, and they derive a quantitative scaling law linking LLM-ASR error rates to computational cost (FLOPs). EFIN achieves a 21.1% relative reduction in character error rate (CER) on mainstream benchmarks while cutting FLOPs by 49.9%, attaining a Pareto-optimal trade-off between efficiency and accuracy. The core contribution is rigorously demonstrating the critical role of encoder pretraining, providing both empirical evidence and practical design principles for efficient LLM-ASR systems.

📝 Abstract
Large language model (LLM)-based automatic speech recognition (ASR) achieves strong performance but often incurs high computational costs. This work investigates how to obtain the best LLM-ASR performance efficiently. Through comprehensive and controlled experiments, we find that pretraining the speech encoder before integrating it with the LLM leads to significantly better scaling efficiency than the standard practice of jointly post-training the full LLM-ASR system. Based on this insight, we propose a new multi-stage LLM-ASR training strategy, EFIN: Encoder First Integration. Among all training strategies evaluated, EFIN consistently delivers better performance (a 21.1% relative character error rate reduction) at significantly lower computation budgets (a 49.9% reduction in FLOPs). Furthermore, we derive a scaling law that approximates ASR error rates as a function of computation, providing practical guidance for LLM-ASR scaling.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational costs in LLM-based ASR systems
Improving scaling efficiency by pretraining the speech encoder
Developing a multi-stage training strategy for better performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pretrain the speech encoder before LLM integration
Propose the multi-stage training strategy EFIN
Derive a scaling law for ASR error rates as a function of compute
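The scaling law itself is not reproduced on this page, but the idea of approximating an error rate as a function of compute can be sketched as a power-law fit. This is an illustrative assumption only: the paper's exact functional form and coefficients are not given here, and the function names below (`fit_power_law`, `predict_wer`) are hypothetical.

```python
import numpy as np

# Illustrative sketch (an assumption, not the paper's exact law): model the
# error rate as a power law in compute, WER(C) ≈ a * C**(-b). Taking logs
# gives a linear relation, log WER = log a − b · log C, which can be fit
# with ordinary least squares.
def fit_power_law(flops, wer):
    """Fit (a, b) in wer ≈ a * flops**(-b) via log-log linear regression."""
    log_c = np.log(np.asarray(flops, dtype=float))
    log_w = np.log(np.asarray(wer, dtype=float))
    slope, intercept = np.polyfit(log_c, log_w, 1)
    return float(np.exp(intercept)), float(-slope)  # (a, b)

def predict_wer(flops, a, b):
    """Predicted error rate at a given compute budget under the fitted law."""
    return a * flops ** -b
```

A fit like this is what makes the Pareto analysis possible: once (a, b) are estimated from a few training runs, the error rate at a larger compute budget can be extrapolated before spending the FLOPs.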