SpecASR: Accelerating LLM-based Automatic Speech Recognition via Speculative Decoding

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high decoding latency of large language models (LLMs) in automatic speech recognition (ASR), which hinders real-time deployment, this paper proposes SpecASR, a speculative decoding framework specifically designed for audio-conditioned generation. Its core contributions are: (1) identifying strong output alignment between small and large ASR models, enabling an adaptive draft-length adjustment and an iterative draft sequence recycling mechanism; and (2) a two-pass sparse token tree generation algorithm that jointly respects audio semantic constraints and optimizes decoding efficiency. Evaluated on standard ASR benchmarks, SpecASR achieves zero accuracy degradation while accelerating end-to-end inference by 3.04x-3.79x over standard autoregressive decoding and by 1.25x-1.84x over conventional speculative decoding.

📝 Abstract
Large language model (LLM)-based automatic speech recognition (ASR) has recently attracted significant attention due to its high recognition accuracy and enhanced multi-dialect support. However, the high decoding latency of LLMs challenges real-time ASR requirements. Although speculative decoding has been explored for better decoding efficiency, existing methods usually ignore the key characteristics of the ASR task and achieve limited speedup. To further reduce real-time ASR latency, in this paper, we propose a novel speculative decoding framework specialized for ASR, dubbed SpecASR. SpecASR is developed based on our core observation that ASR decoding is audio-conditioned, which results in high output alignment between small and large ASR models, even given output mismatches in intermediate decoding steps. Therefore, SpecASR features an adaptive draft sequence generation process that dynamically modifies the draft sequence length to maximize the token acceptance length. SpecASR further proposes a draft sequence recycling strategy that reuses the previously generated draft sequence to reduce the draft ASR model latency. Moreover, a two-pass sparse token tree generation algorithm is also proposed to balance the latency of the draft and target ASR models. With extensive experimental results, we demonstrate that SpecASR achieves 3.04x-3.79x and 1.25x-1.84x speedup over baseline autoregressive decoding and speculative decoding, respectively, without any loss in recognition accuracy.
Problem

Research questions and friction points this paper is trying to address.

Reducing high decoding latency in LLM-based ASR systems
Improving speculative decoding efficiency for ASR tasks
Balancing draft and target model latency without accuracy loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive draft sequence generation for ASR
Draft sequence recycling to reduce latency
Two-pass sparse token tree generation algorithm
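The innovations above build on the standard speculative decoding loop: a small draft model proposes tokens, the large target model verifies them, and the accepted prefix is guaranteed to match the target model's own output. The sketch below illustrates that loop with a simple adaptive draft length (grow on full acceptance, shrink on rejection), loosely in the spirit of SpecASR's adaptive draft generation; it is a toy with deterministic stand-in models, not the paper's actual algorithm, and the function names and heuristics are illustrative assumptions.

```python
def speculative_decode(draft_next, target_next, prompt, max_len=20, init_draft_len=4):
    """Toy speculative decoding with adaptive draft length.

    `draft_next` / `target_next` are stand-ins for the small and large
    ASR models: each maps a token sequence to its next token (greedy,
    deterministic). The draft length heuristic here (grow by 1 on full
    acceptance, shrink by 1 on rejection) is an illustrative assumption.
    """
    tokens = list(prompt)
    draft_len = init_draft_len
    while len(tokens) < max_len:
        # Draft model proposes `draft_len` tokens autoregressively.
        draft = []
        for _ in range(draft_len):
            draft.append(draft_next(tokens + draft))
        # Target model verifies the draft; accept the longest prefix
        # that matches what the target itself would have generated.
        accepted = 0
        for i, tok in enumerate(draft):
            if target_next(tokens + draft[:i]) == tok:
                accepted += 1
            else:
                break
        tokens.extend(draft[:accepted])
        if accepted < len(draft):
            # Mismatch: take the target model's correction and be
            # less aggressive in the next round.
            tokens.append(target_next(tokens))
            draft_len = max(1, draft_len - 1)
        else:
            # Full acceptance: lengthen the next draft.
            draft_len += 1
    return tokens[:max_len]
```

Because every emitted token is either verified against or produced by the target model, the output is identical to pure autoregressive decoding with the target model; the draft model only changes how many tokens each target invocation commits, which is where the speedup comes from.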
Linye Wei
Peking University
Efficient AI System & Accelerator
Shuzhang Zhong
Peking University
Machine Learning System
Songqiang Xu
Institute for Artificial Intelligence, Peking University, Beijing, China; School of Software and Microelectronics, Peking University, Beijing, China
Runsheng Wang
School of Integrated Circuits, Peking University, Beijing, China; Institute of Electronic Design Automation, Peking University, Wuxi, China; Beijing Advanced Innovation Center for Integrated Circuits, Beijing, China
Ru Huang
School of Integrated Circuits, Peking University, Beijing, China; Institute of Electronic Design Automation, Peking University, Wuxi, China; Beijing Advanced Innovation Center for Integrated Circuits, Beijing, China
Meng Li
Institute for Artificial Intelligence, Peking University, Beijing, China; School of Integrated Circuits, Peking University, Beijing, China; Beijing Advanced Innovation Center for Integrated Circuits, Beijing, China