🤖 AI Summary
Joint punctuated and normalized output in end-to-end ASR is hindered by the scarcity of paired speech and punctuated text in most ASR corpora. The paper proposes two approaches for training with limited punctuated data: (1) using a language model to convert normalized training transcripts into punctuated transcripts, which yields up to a 17% relative reduction in Punctuation-Case-aware Word Error Rate (PC-WER) on out-of-domain test data; and (2) a single decoder conditioned on the desired output type, which jointly produces punctuated and normalized transcripts and achieves a 42% relative PC-WER reduction over Whisper-base along with a 4% relative WER reduction on normalized output. The system remains viable with as little as 5% punctuated training data at the cost of a moderate 2.42% absolute PC-WER increase, offering an annotation-light path to joint punctuated and normalized ASR.
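The first approach pairs each normalized transcript with a pseudo-punctuated version produced by a language model. A minimal sketch of that pseudo-labeling step, using a trivial rule-based punctuator as a stand-in for the LM (the function names and rules here are illustrative assumptions, not the paper's implementation):

```python
def toy_punctuate(normalized: str) -> str:
    # Stand-in for an LM-based punctuator: capitalize the first word,
    # restore the pronoun "I", and add a sentence-final period.
    # A real system would prompt a language model here instead.
    words = normalized.split()
    words = ["I" if w == "i" else w for w in words]
    words[0] = words[0].capitalize()
    return " ".join(words) + "."

def make_pseudo_punctuated_corpus(transcripts: list[str]) -> list[tuple[str, str]]:
    # Pair each normalized transcript with its pseudo-punctuated version,
    # giving (normalized, punctuated) target pairs for training the joint
    # ASR model without manually punctuated annotations.
    return [(t, toy_punctuate(t)) for t in transcripts]
```

The resulting pairs let a single model learn both output styles even when the original corpus contains only normalized text.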
📝 Abstract
Joint punctuated and normalized automatic speech recognition (ASR), which outputs transcripts both with and without punctuation and casing, remains challenging due to the lack of paired speech and punctuated text data in most ASR corpora. We propose two approaches to train an end-to-end joint punctuated and normalized ASR system using limited punctuated data. The first approach uses a language model to convert normalized training transcripts into punctuated transcripts. This achieves better performance on out-of-domain test data, with up to 17% relative Punctuation-Case-aware Word Error Rate (PC-WER) reduction. The second approach uses a single decoder conditioned on the type of output. This yields a 42% relative PC-WER reduction compared to Whisper-base and a 4% relative (normalized) WER reduction compared to the normalized output of a punctuated-only model. Additionally, our proposed model demonstrates the feasibility of a joint ASR system using as little as 5% punctuated training data with a moderate (2.42% absolute) PC-WER increase.
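PC-WER extends ordinary WER by scoring the cased, punctuated transcript, so punctuation and capitalization mistakes count as word errors. A minimal sketch of one plausible formulation, in which punctuation marks are split off as separate tokens and casing is preserved (the tokenization scheme is an assumption, not the paper's exact definition):

```python
import re

def tokenize_pc(text: str) -> list[str]:
    # Split punctuation marks into their own tokens, keep original casing.
    return re.findall(r"\w+|[^\w\s]", text)

def edit_distance(ref: list[str], hyp: list[str]) -> int:
    # Standard single-row Levenshtein distance over token sequences.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,        # deletion
                                   d[j - 1] + 1,    # insertion
                                   prev + (r != h)) # substitution / match
    return d[len(hyp)]

def pc_wer(ref: str, hyp: str) -> float:
    # Edit distance over case-sensitive tokens, normalized by reference length.
    ref_toks, hyp_toks = tokenize_pc(ref), tokenize_pc(hyp)
    return edit_distance(ref_toks, hyp_toks) / len(ref_toks)
```

For example, scoring the unpunctuated, lowercased hypothesis "hello world" against the reference "Hello, world." counts the missing comma, the missing period, and the casing error as three errors against four reference tokens.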