Audio-Conditioned Diffusion LLMs for ASR and Deliberation Processing

📅 2025-09-20
🤖 AI Summary
This work addresses limited post-processing and decoding accuracy in automatic speech recognition (ASR). It applies LLaDA, an audio-conditioned diffusion large language model, to ASR both as an external deliberation module for Whisper-LLaMA transcriptions and as a standalone end-to-end decoder. Key components include audio feature embeddings, bidirectional attention, denoising-based modeling, and three decoding strategies: random masking, low-confidence masking, and semi-autoregressive decoding, which together enable robust correction of erroneous segments. On the LibriSpeech test-clean/test-other sets, the best cascade system achieves WERs of 2.25% and 4.94%, a 12.3% relative reduction over the Whisper-LLaMA baseline on test-other. Used as a standalone decoder, most configurations run faster than the baseline at slightly lower recognition accuracy. The study provides the first empirical validation of diffusion-based language models for ASR post-processing and joint audio-text decoding.

📝 Abstract
Diffusion-based large language models (DLLMs) have recently attracted growing interest as an alternative to autoregressive decoders. In this work, we present an empirical study on using the diffusion-based large language model LLaDA for automatic speech recognition (ASR). We first investigate its use as an external deliberation-based processing module for Whisper-LLaMA transcripts. By leveraging the bidirectional attention and denoising capabilities of LLaDA, we explore random masking, low-confidence masking, and semi-autoregressive strategies, showing that Whisper-LLaDA substantially reduces WER compared with the baseline. On LibriSpeech, the best cascade system achieves 2.25%/4.94% WER on test-clean/test-other, representing a 12.3% relative improvement over the Whisper-LLaMA baseline on the test-other split. In contrast, a plain-text LLaDA without acoustic features fails to improve accuracy, highlighting the importance of audio-conditioned embeddings. We further evaluate Whisper-LLaDA as a standalone decoder for ASR with diffusion-based and semi-autoregressive decoding. Most experimental configurations achieve faster inference than the Whisper-LLaMA baseline, although recognition accuracy is slightly lower. These findings offer an empirical view of diffusion-based LLMs for ASR and point to promising directions for improvements.
Problem

Research questions and friction points this paper is trying to address.

Improving automatic speech recognition accuracy using diffusion-based language models
Exploring audio-conditioned embeddings for better speech processing performance
Comparing diffusion-based decoding strategies with autoregressive methods for ASR
Innovation

Methods, ideas, or system contributions that make the work stand out.

Audio-conditioned diffusion LLMs for ASR
External deliberation module with bidirectional attention
Semi-autoregressive decoding for faster inference
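
The low-confidence masking strategy above can be sketched as an iterative unmask-and-remask loop: start from a fully masked hypothesis, and at each denoising step commit only the model's most confident predictions while remasking the rest, so uncertain positions receive more refinement passes. The sketch below is illustrative only and abstracts the model into a `predict` callable; the token names, the linear unmask schedule, and the `MASK` sentinel are assumptions, not LLaDA's actual implementation.

```python
MASK = "<mask>"  # hypothetical mask sentinel; the real model uses a vocabulary id

def low_confidence_decode(predict, seq_len, steps=4):
    """Sketch of low-confidence remasking for diffusion-style decoding.

    predict(tokens) -> list of (token, confidence) pairs, one per position,
    standing in for a full forward pass of an audio-conditioned model.
    """
    tokens = [MASK] * seq_len
    for step in range(1, steps + 1):
        proposals = predict(tokens)
        # Linear unmask schedule: after step k, keep the k/steps most
        # confident positions and remask everything else.
        keep = max(1, seq_len * step // steps)
        ranked = sorted(range(seq_len), key=lambda i: -proposals[i][1])
        tokens = [MASK] * seq_len
        for i in ranked[:keep]:
            tokens[i] = proposals[i][0]
    return tokens  # fully unmasked after the final step
```

Semi-autoregressive decoding, as described in the abstract, would apply the same loop blockwise from left to right instead of over the whole sequence at once.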
Mengqi Wang, University of Illinois at Urbana-Champaign
Zhan Liu, Tsinghua University
Zengrui Jin, Tsinghua University (Speech Recognition)
Guangzhi Sun, University of Cambridge (Speech and language technology, conversational AI)
Chao Zhang, Tsinghua University
Philip C. Woodland, University of Cambridge