Distilling LLM Semantic Priors into Encoder-Only Multi-Talker ASR with Talker-Count Routing

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation of multi-talker automatic speech recognition (ASR) in high-overlap scenarios, as well as the high computational cost and limited robustness of large language model (LLM)-based decoders. The authors propose an encoder-only multi-talker ASR framework that, for the first time, injects semantic priors from an LLM into the encoder via knowledge distillation. A talker-count prediction head dynamically routes decoding paths, enabling efficient transcription with a variable number of talkers. By integrating serialized CTC, a talker-ordering separator, and an LLM-adapted serialized output training (SOT) objective, the method matches LLM-based decoders on two-talker LibriMix mixtures, significantly improves accuracy in the three-talker setting, and substantially reduces the real-time factor (RTF).
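Based on the summary above, the overall training objective can be sketched as a weighted combination of the serialized CTC loss, the LLM distillation loss, and a classification loss for the talker-count head. This is an illustrative assumption; the paper's exact terms and weights λ may differ:

```latex
\mathcal{L} \;=\; \mathcal{L}_{\text{CTC}} \;+\; \lambda_{\text{KD}}\,\mathcal{L}_{\text{KD}} \;+\; \lambda_{\text{cnt}}\,\mathcal{L}_{\text{count}}
```

Here \(\mathcal{L}_{\text{KD}}\) would measure the mismatch between encoder representations and the adapted LLM teacher's semantic guidance, and \(\mathcal{L}_{\text{count}}\) would be a cross-entropy loss on the predicted talker count.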

📝 Abstract
Large language models (LLMs) provide strong semantic priors that can improve multi-talker automatic speech recognition (MT-ASR), but using an LLM as an autoregressive decoder is computationally expensive and remains fragile under heavy overlap. In this paper, we propose an encoder-only MT-ASR framework that adapts an LLM to multi-talker conditioning and distills its semantic guidance into the encoder during training, while retaining fast CTC-style decoding at inference. Our model employs a post-encoder separator with serialized CTC to produce talker-ordered transcripts, and leverages an adapted LLM-based SOT objective as a multi-talker-aware teacher signal to explicitly regularize mixed-speech representations. To further support variable numbers of talkers, we introduce a Talker-Count Head that predicts the talker count and dynamically selects the appropriate decoding branch. Experiments on LibriMix show that the proposed encoder-only model achieves performance comparable to LLM-based systems in the two-talker condition, while delivering significant improvements in the three-talker condition at a significantly smaller real-time factor (RTF).
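The Talker-Count Head described in the abstract can be pictured as a small classifier whose prediction selects one of several fixed decoding branches. The following is a minimal, framework-free sketch of that routing step; the function and branch names are hypothetical and not taken from the paper:

```python
def route_decoding(count_logits, branches):
    """Select the decoding branch matching the predicted talker count.

    count_logits: list of scores where index k corresponds to (k + 1) talkers,
                  as a talker-count head might emit.
    branches:     dict mapping a talker count to a decoder callable.
    """
    # Argmax over the count head's logits gives the predicted number of talkers.
    predicted = max(range(len(count_logits)), key=lambda k: count_logits[k]) + 1
    return branches[predicted]


# Toy usage: stand-in decoders that just return placeholder transcripts.
branches = {
    1: lambda feats: ["transcript"],
    2: lambda feats: ["talker1", "talker2"],
    3: lambda feats: ["talker1", "talker2", "talker3"],
}

# Logits favour index 1, i.e. two talkers, so the two-talker branch is chosen.
decoder = route_decoding([0.1, 2.3, 0.4], branches)
print(decoder(None))  # prints ['talker1', 'talker2']
```

In the actual system each branch would run serialized CTC decoding for that talker count; the point of the sketch is only that routing is a cheap classification step, which is how the model stays efficient across variable numbers of talkers.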
Problem

Research questions and friction points this paper is trying to address.

multi-talker ASR
large language models
semantic priors
computational efficiency
speech overlap
Innovation

Methods, ideas, or system contributions that make the work stand out.

encoder-only ASR
LLM distillation
multi-talker speech separation
talker-count routing
semantic priors