AI Summary
To address the train-inference distribution mismatch that limits non-autoregressive (NAR) automatic speech recognition (ASR), this paper proposes Drax, the first NAR ASR framework based on discrete flow matching. Drax constructs an audio-conditioned probability flow path that explicitly models the kinds of erroneous intermediate token trajectories encountered during inference, thereby mitigating the distributional shift between training and inference. Theoretically, it links the generalization error to cumulative velocity errors, providing principled guidance for model design. Drax enables fully parallel decoding and achieves recognition accuracy competitive with state-of-the-art autoregressive models on benchmarks including LibriSpeech, while substantially improving decoding efficiency. Extensive experiments validate the effectiveness and scalability of discrete flow matching for ASR.
Abstract
Diffusion and flow-based non-autoregressive (NAR) models have shown strong promise in large language modeling; however, their potential for automatic speech recognition (ASR) remains largely unexplored. We propose Drax, a discrete flow matching framework for ASR that enables efficient parallel decoding. To better align training with inference, we construct an audio-conditioned probability path that guides the model through trajectories resembling likely intermediate inference errors, rather than through direct noise-to-target transitions. Our theoretical analysis links the generalization gap to divergences between training and inference occupancies, which are controlled by cumulative velocity errors, thereby motivating our design choice. Empirical evaluation demonstrates that our approach attains recognition accuracy on par with state-of-the-art speech models while offering improved accuracy-efficiency trade-offs, highlighting discrete flow matching as a promising direction for advancing NAR ASR.
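To make the parallel-decoding idea concrete, below is a minimal toy sketch of mask-based discrete flow matching sampling. It is not the paper's actual audio-conditioned method: the `MASK` token, the linear probability path, the unmasking rate `dt/(1 - t)`, and the stand-in `denoiser` are all illustrative assumptions. The point it shows is that every position is updated in parallel at each step, rather than one token at a time as in autoregressive decoding.

```python
import random

MASK = -1  # hypothetical noise/mask token (illustrative choice)

def corrupt(target, t, rng):
    # Toy linear probability path: each position shows its target token
    # with probability t, otherwise the mask (noise) token.
    return [y if rng.random() < t else MASK for y in target]

def sample(denoiser, length, steps, rng):
    # Fully parallel decoding sketch: start from all-mask and, at each
    # Euler step, unmask each still-masked position independently with
    # probability dt / (1 - t), filling it with the denoiser's prediction.
    x = [MASK] * length
    t, dt = 0.0, 1.0 / steps
    for _ in range(steps):
        preds = denoiser(x)  # one parallel pass over all positions
        rate = min(1.0, dt / max(1.0 - t, 1e-9))
        x = [p if xi == MASK and rng.random() < rate else xi
             for xi, p in zip(x, preds)]
        t += dt
    # Fill any position that stayed masked due to rounding.
    preds = denoiser(x)
    return [p if xi == MASK else xi for xi, p in zip(x, preds)]
```

With an oracle denoiser that always predicts the target sequence, `sample` recovers the target exactly in a fixed number of parallel steps; a learned, audio-conditioned denoiser would replace the oracle in practice.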