🤖 AI Summary
This work proposes a semantics-enhanced streaming automatic speech recognition (ASR) approach to address the performance degradation caused by the absence of future context in low-latency scenarios. A context-aware semantic module, trained via knowledge distillation from a sentence-embedding language model, extracts high-level semantic cues from the available past frames and injects them into a Transformer-based recognizer. This integration mitigates the context deficiency inherent in chunk-based streaming processing. Experiments on standard benchmarks show a significant reduction in word error rate, supporting the use of semantic guidance to improve streaming transcription quality.
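The "chunk-based streaming processing" mentioned above restricts each frame's attention to its own chunk and the past. A minimal sketch of such a mask, assuming a hypothetical `chunk_size` hyper-parameter not taken from the paper:

```python
def chunk_attention_mask(num_frames: int, chunk_size: int) -> list[list[bool]]:
    """Boolean mask for chunk-based streaming self-attention.

    Frame i may attend to frame j only if j lies in the same chunk as i
    or in an earlier chunk, i.e., no future context beyond the current
    chunk. Illustrative sketch, not the paper's implementation.
    """
    chunk_of = [i // chunk_size for i in range(num_frames)]
    # allowed[i][j] is True when frame j's chunk is not later than frame i's.
    return [[chunk_of[j] <= chunk_of[i] for j in range(num_frames)]
            for i in range(num_frames)]

mask = chunk_attention_mask(num_frames=6, chunk_size=2)
# Each frame sees its own chunk plus all past chunks only.
```

Smaller chunks lower latency but remove more future context, which is exactly the regime where the semantic module is meant to help.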
📝 Abstract
Many Automatic Speech Recognition (ASR) applications require streaming processing of audio data. In streaming mode, ASR systems must start transcribing the input stream before it is complete, i.e., they have to process a stream of inputs with limited (or no) future context. Compared to offline mode, this reduction of future context degrades the performance of Streaming-ASR systems, especially under low-latency constraints. In this work, we present SENS-ASR, an approach that enhances the transcription quality of Streaming-ASR by reinforcing the acoustic information with semantic information. This semantic information is extracted from the available past frame embeddings by a context module. This module is trained using knowledge distillation from a sentence-embedding Language Model fine-tuned on the transcriptions of the training dataset. Experiments on standard datasets show that SENS-ASR significantly improves the Word Error Rate in small-chunk streaming scenarios.
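The distillation step described above matches the context module's output against the teacher's sentence embedding. The abstract does not specify the loss; a cosine-distance objective is a common choice for matching sentence embeddings, sketched here under that assumption:

```python
import math

def distillation_loss(student_emb: list[float], teacher_emb: list[float]) -> float:
    """Cosine-distance distillation loss (illustrative assumption).

    `student_emb` stands in for the context module's semantic vector
    pooled from past frame embeddings; `teacher_emb` for the sentence
    embedding produced by the fine-tuned Language Model. Zero loss
    means the student reproduces the teacher's direction exactly.
    """
    dot = sum(s * t for s, t in zip(student_emb, teacher_emb))
    norm_s = math.sqrt(sum(s * s for s in student_emb))
    norm_t = math.sqrt(sum(t * t for t in teacher_emb))
    return 1.0 - dot / (norm_s * norm_t)

# Identical embeddings give loss 0.0; orthogonal embeddings give 1.0.
```

In training, this loss would be minimized alongside the usual ASR objective so the context module learns to summarize past frames the way the teacher summarizes the transcript.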