🤖 AI Summary
To address the high development, training, and deployment costs arising from maintaining separate streaming and non-streaming ASR models, this paper proposes a unified end-to-end ASR framework. We introduce a dynamic right-context mechanism into the Zipformer architecture for the first time, coupled with chunked attention masking, to flexibly and controllably incorporate future-frame information during training. This design fully leverages Zipformer's multi-scale modeling capability, enabling a single model to support both low-latency streaming recognition and high-accuracy non-streaming recognition. Trained on LibriSpeech and large-scale internal conversational data, the model achieves a 7.9% relative word error rate reduction in production-level evaluation, with streaming performance approaching that of the non-streaming baseline and fine-grained control over the latency–accuracy trade-off. The framework has been successfully deployed and validated across multiple real-world application domains.
📝 Abstract
There has been increasing interest in unifying streaming and non-streaming automatic speech recognition (ASR) models to reduce development, training, and deployment costs. We present a unified framework that trains a single end-to-end ASR model for both streaming and non-streaming applications by leveraging future context information. We propose to use dynamic right-context through chunked attention masking in the training of Zipformer-based ASR models. We demonstrate that using right-context is more effective in Zipformer models than in other Conformer-style models due to Zipformer's multi-scale nature. We analyze the effect of varying the number of right-context frames on the accuracy and latency of streaming ASR models. We use LibriSpeech and large in-house conversational datasets to train different versions of streaming and non-streaming models and evaluate them in a production-grade server-client setup across diverse test sets from different domains. The proposed strategy reduces word error rate by a relative 7.9% with a small degradation in user-perceived latency. By adding more right-context frames, we are able to achieve streaming performance close to that of non-streaming models. Our approach also allows flexible control of the latency-accuracy tradeoff according to customers' requirements.
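To make the core idea concrete, below is a minimal sketch of chunked attention masking with a configurable number of right-context frames, the mechanism the abstract describes. This is an illustrative reconstruction, not the paper's code: the function name, chunk layout, and mask convention are assumptions. Each frame may attend to everything up to the end of its own chunk, plus `right_context` additional future frames; setting `right_context=0` recovers plain chunked (streaming) masking, while a large value approaches full non-streaming attention.

```python
import numpy as np

def chunked_mask(num_frames: int, chunk_size: int, right_context: int = 0) -> np.ndarray:
    """Boolean attention mask (hypothetical helper, not from the paper).

    mask[i, j] is True when query frame i is allowed to attend to key frame j.
    Frame i sees all frames up to the end of its chunk, plus `right_context`
    future frames beyond the chunk boundary.
    """
    mask = np.zeros((num_frames, num_frames), dtype=bool)
    for i in range(num_frames):
        # End of the chunk containing frame i (exclusive index).
        chunk_end = ((i // chunk_size) + 1) * chunk_size
        # Extend visibility by the allowed right-context frames.
        visible_end = min(chunk_end + right_context, num_frames)
        mask[i, :visible_end] = True
    return mask

# Example: 8 frames, chunks of 4, with 2 right-context frames.
# Frame 0 can attend to frames 0..5 (its chunk 0..3 plus 2 future frames).
m = chunked_mask(8, chunk_size=4, right_context=2)
print(m.astype(int))
```

In training, one would draw `right_context` dynamically per batch (the paper's "dynamic right-context"), so a single set of weights learns to operate anywhere on the latency-accuracy curve; at inference time the deployed value fixes the trade-off.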