🤖 AI Summary
In end-to-end multi-speaker automatic speech recognition (ASR), overlapping speech recognition performance is limited by the decoder’s insufficient implicit speaker separation capability. To address this, we propose Speaker-Conditioned Sequential Output Training (SC-SOT), the first method to jointly inject registration-free, end-to-end speaker diarization information—namely, speaker embeddings and activity masks—into the ASR decoder. SC-SOT employs speaker-conditioned attention and target-speaker suppression to enable fine-grained, explicit speaker-aware modeling during decoding. Evaluated on overlapping speech benchmarks, SC-SOT achieves substantial improvements in recognition accuracy, yielding a 12.7% relative reduction in word error rate (WER) under high overlap conditions. These results demonstrate that explicit speaker modeling effectively enhances decoder discriminability, overcoming the performance ceiling imposed by conventional implicit separation approaches.
📝 Abstract
We propose Speaker-Conditioned Serialized Output Training (SC-SOT), an enhanced SOT-based training for E2E multi-talker ASR. We first probe how SOT handles overlapped speech, and we found the decoder performs implicit speaker separation. We hypothesize this implicit separation is often insufficient due to ambiguous acoustic cues in overlapping regions. To address this, SC-SOT explicitly conditions the decoder on speaker information, providing detailed information about"who spoke when". Specifically, we enhance the decoder by incorporating: (1) speaker embeddings, which allow the model to focus on the acoustic characteristics of the target speaker, and (2) speaker activity information, which guides the model to suppress non-target speakers. The speaker embeddings are derived from a jointly trained E2E speaker diarization model, mitigating the need for speaker enrollment. Experimental results demonstrate the effectiveness of our conditioning approach on overlapped speech.