EffectiveASR: A Single-Step Non-Autoregressive Mandarin Speech Recognition Architecture with High Accuracy and Inference Speed

๐Ÿ“… 2024-06-13
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the trade-off between accuracy and inference speed in non-autoregressive (NAR) speech recognition, this paper proposes EffectiveASR, a single-step NAR model for Mandarin. The core innovation is the Index Mapping Vector (IMV) alignment mechanism, which jointly trains an alignment generator and an alignment predictor, and combines cross-entropy loss with alignment loss in a unified end-to-end objective. Unlike conventional iterative decoding or teacher-forced alignment strategies, EffectiveASR enables truly single-step, differentiable, end-to-end training. On AISHELL-1, it achieves 4.26%/4.62% character error rate (CER) on dev/test, matching the accuracy of the autoregressive Conformer while accelerating inference by approximately 30x. On AISHELL-2, it attains state-of-the-art performance, demonstrating both the effectiveness and generalizability of the IMV alignment mechanism for Mandarin NAR speech recognition.

๐Ÿ“ Abstract
Non-autoregressive (NAR) automatic speech recognition (ASR) models predict tokens independently and simultaneously, enabling high inference speed. However, NAR models still trail autoregressive (AR) models in accuracy. In this paper, we propose a single-step NAR ASR architecture with high accuracy and inference speed, called EffectiveASR. It uses an Index Mapping Vector (IMV) based alignment generator to produce alignments during training, and an alignment predictor to learn those alignments for inference. The model can be trained end-to-end (E2E) with cross-entropy loss combined with alignment loss. EffectiveASR achieves competitive results against leading models on the AISHELL-1 and AISHELL-2 Mandarin benchmarks. Specifically, it achieves character error rates (CER) of 4.26%/4.62% on the AISHELL-1 dev/test sets, outperforming the AR Conformer with about a 30x inference speedup.
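The paper's exact IMV formulation is not given in this summary, so the following is only a hypothetical sketch of the general idea: a monotone frame-to-token alignment lets all token positions be decoded in parallel (single-step), and training can combine a token-level cross-entropy loss with an alignment regression loss. The function names, mean-pooling of frames per token, and the L1 alignment term are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pool_frames_by_imv(frames, imv, num_tokens):
    """Pool encoder frames into token positions using an alignment vector.

    frames: (T, D) frame-level encoder outputs.
    imv:    (T,) token index assigned to each frame (monotone, illustrative
            stand-in for the paper's Index Mapping Vector).
    Returns (num_tokens, D) token-level features, enabling all tokens to be
    classified in parallel in a single decoding step.
    """
    pooled = np.zeros((num_tokens, frames.shape[1]))
    for t in range(num_tokens):
        mask = imv == t
        if mask.any():
            pooled[t] = frames[mask].mean(axis=0)  # assumption: mean-pooling
    return pooled

def combined_loss(token_logits, targets, imv_pred, imv_ref, lam=1.0):
    """Cross-entropy over all tokens (predicted simultaneously) plus an
    alignment loss between the predictor's and generator's alignments.
    The L1 form and the weight `lam` are assumptions for illustration."""
    shifted = token_logits - token_logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(len(targets)), targets] + 1e-9).mean()
    align = np.abs(imv_pred - imv_ref).mean()
    return ce + lam * align
```

Because every token position is scored in one parallel pass rather than conditioned on previously emitted tokens, inference cost no longer grows with output length, which is the source of the large speedup over AR decoding.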
Problem

Research questions and friction points this paper is trying to address.

Non-autoregressive models
Speech recognition
Accuracy-speed trade-off
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-autoregressive ASR
Alignment Optimization
End-to-end Training
๐Ÿ”Ž Similar Papers
No similar papers found.
Ziyang Zhuang
Ping An Technology
Chenfeng Miao
Ping An Technology
Kun Zou
Ping An Technology
Ming Fang
Ping An Technology
Tao Wei
Ping An Technology
Zijian Li
Georgia Institute of Technology
Ning Cheng
TeraHop
Wei Hu
Ping An Technology
Shaojun Wang
Jing Xiao
Ping An Technology