Improving Practical Aspects of End-to-End Multi-Talker Speech Recognition for Online and Offline Scenarios

📅 2025-06-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of recognizing highly overlapping multi-speaker speech in both online (low-latency) and offline (high-accuracy) automatic speech recognition (ASR) scenarios, this paper proposes a unified end-to-end framework. Methodologically, it integrates a single-channel continuous speech separation (CSS) front-end with end-to-end ASR; it pairs two models, a Conformer Transducer for streaming and a Sequence-to-Sequence model for offline use, and introduces segment-based serialized output training (segSOT) to jointly improve robustness to speaker overlap and the readability of multi-talker transcriptions. The approach achieves streaming latency under 300 ms while substantially reducing offline word error rate (WER). Experimental results demonstrate the feasibility and state-of-the-art performance of end-to-end multi-speaker ASR for real-world applications such as live captioning and meeting summarization.

📝 Abstract
We extend the frameworks of Serialized Output Training (SOT) to address practical needs of both streaming and offline automatic speech recognition (ASR) applications. Our approach focuses on balancing latency and accuracy, catering to real-time captioning and summarization requirements. We propose several key improvements: (1) Leveraging Continuous Speech Separation (CSS) single-channel front-end with end-to-end (E2E) systems for highly overlapping scenarios, challenging the conventional wisdom of E2E versus cascaded setups. The CSS framework improves the accuracy of the ASR system by separating overlapped speech from multiple speakers. (2) Implementing dual models -- Conformer Transducer for streaming and Sequence-to-Sequence for offline -- or alternatively, a two-pass model based on cascaded encoders. (3) Exploring segment-based SOT (segSOT) which is better suited for offline scenarios while also enhancing readability of multi-talker transcriptions.
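The pipeline the abstract describes (CSS front-end feeding scenario-specific recognizers) can be sketched as follows. This is an illustrative skeleton only: every class and function name here (`separate_css`, `StreamingTransducer`, `OfflineSeq2Seq`, `transcribe`) is a placeholder invented for this example, and the stubs stand in for neural models that are not reproduced here.

```python
# Illustrative-only sketch of the CSS front-end + dual-model routing
# described in the abstract. All names are hypothetical placeholders;
# the real components are neural networks, not these string-based stubs.

class StreamingTransducer:
    """Stand-in for the low-latency Conformer Transducer path."""
    def decode(self, channel):
        return f"[streaming] {channel}"

class OfflineSeq2Seq:
    """Stand-in for the higher-accuracy offline Sequence-to-Sequence path."""
    def decode(self, channel):
        return f"[offline] {channel}"

def separate_css(mixed_audio, num_channels=2):
    """Placeholder CSS front-end: real CSS maps overlapped speech onto a
    fixed number of mutually non-overlapping output channels."""
    return [f"{mixed_audio}/ch{i}" for i in range(num_channels)]

def transcribe(mixed_audio, streaming=True):
    channels = separate_css(mixed_audio)              # 1. separate overlap
    model = StreamingTransducer() if streaming else OfflineSeq2Seq()
    return [model.decode(ch) for ch in channels]      # 2. recognize per channel

print(transcribe("meeting.wav"))                   # low-latency path
print(transcribe("meeting.wav", streaming=False))  # high-accuracy path
```

The point of the routing is that both scenarios share one separation front-end, while the back-end recognizer is swapped depending on whether latency or accuracy is the priority (the abstract also mentions a two-pass cascaded-encoder alternative to running two separate models).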
Problem

Research questions and friction points this paper is trying to address.

Balancing latency and accuracy in streaming and offline ASR
Improving accuracy in overlapping speech with CSS and E2E systems
Enhancing multi-talker transcription readability with segment-based SOT

Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous Speech Separation front-end for overlapping speech
Dual models for streaming and offline ASR
Segment-based SOT for better offline transcription
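To make the segSOT idea above concrete, the sketch below contrasts utterance-level SOT serialization (all speakers' text concatenated in first-in-first-out order with a speaker-change token) against a segment-based variant. The `<sc>` token, the fixed 10-second segment length, and the grouping logic are assumptions made for illustration; the paper's actual segmentation scheme may differ.

```python
# Minimal sketch of SOT-style target serialization, assuming a <sc>
# speaker-change token. The segment grouping is an illustrative guess at
# segment-based SOT (segSOT): text is serialized within time segments
# rather than across the whole session, which keeps turns closer to
# their natural reading order.

SC = "<sc>"  # speaker-change token used in SOT-style training targets

def serialize_sot(utterances):
    """Utterance-level SOT: order all utterances by start time (FIFO)."""
    ordered = sorted(utterances, key=lambda u: u["start"])
    return f" {SC} ".join(u["text"] for u in ordered)

def serialize_segsot(utterances, seg_len=10.0):
    """Segment-based SOT: serialize separately within fixed-length segments."""
    segments = {}
    for u in sorted(utterances, key=lambda u: u["start"]):
        segments.setdefault(int(u["start"] // seg_len), []).append(u["text"])
    return [f" {SC} ".join(texts) for _, texts in sorted(segments.items())]

utts = [
    {"start": 1.0, "text": "hello everyone"},
    {"start": 2.5, "text": "hi there"},
    {"start": 12.0, "text": "let us begin"},
]
print(serialize_sot(utts))     # one long serialized string
print(serialize_segsot(utts))  # one serialized string per 10 s segment
```

Breaking the serialization at segment boundaries is what the abstract credits with improved readability of multi-talker transcriptions in offline use.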