🤖 AI Summary
This work addresses the performance degradation of multi-speaker automatic speech recognition (ASR) in high-overlap scenarios, as well as the high computational cost and poor robustness of large language model (LLM)-based decoders. The authors propose a novel encoder-only multi-speaker ASR framework that, for the first time, injects semantic priors from an LLM into the encoder via knowledge distillation. A Talker-Count Head is introduced to predict the number of speakers and dynamically route decoding, enabling efficient transcription with a variable number of speakers. By integrating serialized CTC, a speaker-ordering separator, and an adapted serialized output training (SOT) objective as a teacher signal, the method achieves performance competitive with LLM-based decoders on two-speaker LibriMix mixtures and significantly improves accuracy in the three-speaker setting, while substantially reducing the real-time factor (RTF).
📝 Abstract
Large language models (LLMs) provide strong semantic priors that can improve multi-talker automatic speech recognition (MT-ASR), but using an LLM as an autoregressive decoder is computationally expensive and remains fragile under heavy overlap. In this paper, we propose an encoder-only MT-ASR framework that adapts an LLM to multi-talker conditioning and distills its semantic guidance into the encoder during training, while retaining fast CTC-style decoding at inference. Our model employs a post-encoder separator with serialized CTC to produce talker-ordered transcripts, and leverages an adapted LLM-based serialized output training (SOT) objective as a multi-talker-aware teacher signal to explicitly regularize mixed-speech representations. To further support variable numbers of talkers, we introduce a Talker-Count Head that predicts the talker count and dynamically selects the appropriate decoding branch. Experiments on LibriMix show that the proposed encoder-only model achieves performance comparable to LLM-based systems in the two-talker condition, while delivering significant improvements in the three-talker condition at a significantly smaller RTF.
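The abstract describes a Talker-Count Head that predicts the number of talkers from the encoder output and then routes decoding to a count-specific CTC branch. The paper does not give implementation details, so the following is only a minimal NumPy sketch of that routing idea under assumed shapes and randomly initialized weights; the pooling, branch structure, and dimensions (`T`, `D`, `MAX_TALKERS`) are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper).
T, D, MAX_TALKERS, VOCAB = 50, 16, 3, 5

def talker_count_head(enc_out, W_cnt, b_cnt):
    """Predict the talker count from mean-pooled encoder features."""
    pooled = enc_out.mean(axis=0)          # (D,)
    logits = pooled @ W_cnt + b_cnt        # (MAX_TALKERS,)
    return int(np.argmax(logits)) + 1      # counts range over 1..MAX_TALKERS

def make_branch(num_talkers):
    """Toy CTC-style branch: one greedy token stream per talker."""
    W = rng.normal(size=(D, VOCAB * num_talkers))
    def branch(enc_out):
        logits = enc_out @ W                               # (T, VOCAB*num_talkers)
        per_talker = logits.reshape(-1, num_talkers, VOCAB)
        # Greedy frame-level argmax per talker stands in for CTC decoding.
        return [per_talker[:, k].argmax(axis=-1) for k in range(num_talkers)]
    return branch

def route_decoding(enc_out, branches, W_cnt, b_cnt):
    """Select the decoding branch matching the predicted talker count."""
    n = talker_count_head(enc_out, W_cnt, b_cnt)
    return n, branches[n](enc_out)

branches = {n: make_branch(n) for n in range(1, MAX_TALKERS + 1)}
W_cnt = rng.normal(size=(D, MAX_TALKERS))
b_cnt = np.zeros(MAX_TALKERS)

enc_out = rng.normal(size=(T, D))          # stand-in for encoder output
n, streams = route_decoding(enc_out, branches, W_cnt, b_cnt)
```

At inference, only the selected branch runs, which is consistent with the paper's claim that encoder-only routing avoids the cost of autoregressive LLM decoding.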