AI Summary
Existing LLM-based multi-speaker automatic speech recognition (ASR) systems largely neglect prompt engineering, relying solely on simplistic task instructions and lacking structured prompts tailored for speaker separation and sequential transcription. Method: This paper presents the first systematic exploration of LLM prompting for multi-speaker ASR, introducing Serialized Output Prompting, a novel paradigm that explicitly models speaker order and content boundaries to guide LLMs in disentangling overlapping speech and generating temporally ordered transcriptions. Technically, we propose a serialized CTC layer and a three-stage collaborative training strategy integrating a speech encoder, a blind source separation module, and a greedy decoder for end-to-end prompt-driven recognition. Results: Evaluated on LibriMix, our approach achieves significant WER reductions in both two- and three-speaker scenarios, demonstrating that this prompting paradigm effectively unlocks the LLM's capacity to model complex, overlapping speech.
Abstract
Prompts are crucial for task definition and for improving the performance of large language model (LLM)-based systems. However, existing LLM-based multi-talker (MT) automatic speech recognition (ASR) systems either omit prompts or rely on simple task-definition prompts, and no prior work has explored the design of prompts to enhance performance. In this paper, we propose extracting serialized output prompts (SOP) and explicitly guiding the LLM with structured prompts to improve system performance (SOP-MT-ASR). Separator and serialized Connectionist Temporal Classification (CTC) layers are inserted after the speech encoder to separate and extract MT content from the mixed speech encoding in a first-speaking-first-out manner. Subsequently, the SOP, which serves as a prompt for the LLM, is obtained by decoding the serialized CTC outputs using greedy search. To train the model effectively, we design a three-stage training strategy consisting of serialized output training (SOT) fine-tuning, serialized speech information extraction, and SOP-based adaptation. Experimental results on the LibriMix dataset show that, although the LLM-based SOT model performs well in the two-talker scenario, it fails to fully leverage LLMs under more complex conditions, such as the three-talker scenario. The proposed SOP approach significantly improves performance under both two- and three-talker conditions.
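The abstract describes obtaining the SOP by greedy-decoding the serialized CTC outputs and arranging speaker content in first-speaking-first-out order. The following is a minimal sketch of that extraction step; the toy vocabulary, the blank index, the `<sc>` speaker-change token, and the prompt template are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch: turn frame-level serialized CTC scores into a structured
# serialized output prompt (SOP). Vocabulary and tokens are assumptions.

BLANK = 0
VOCAB = {0: "<blank>", 1: "hello", 2: "goodbye", 3: "<sc>", 4: "there"}

def ctc_greedy_decode(frame_scores):
    """Standard CTC greedy search: argmax per frame, collapse repeated
    tokens, then drop blanks."""
    best = [max(range(len(f)), key=lambda i: f[i]) for f in frame_scores]
    collapsed = [t for i, t in enumerate(best) if i == 0 or t != best[i - 1]]
    return [t for t in collapsed if t != BLANK]

def build_sop(frame_scores):
    """Decode the serialized CTC output and split it at speaker-change
    tokens into a first-speaking-first-out prompt for the LLM."""
    tokens = [VOCAB[t] for t in ctc_greedy_decode(frame_scores)]
    segments, current = [], []
    for tok in tokens:
        if tok == "<sc>":        # speaker-change marker: close the segment
            segments.append(current)
            current = []
        else:
            current.append(tok)
    segments.append(current)
    return " ".join(
        f"Speaker {i + 1}: {' '.join(seg)}" for i, seg in enumerate(segments)
    )

def one_hot(i, size=5):
    """Toy per-frame score vector whose argmax is token i."""
    return [1.0 if j == i else 0.0 for j in range(size)]

# Frames: "hello hello <blank> <sc> goodbye" -> two speakers' content.
frames = [one_hot(1), one_hot(1), one_hot(0), one_hot(3), one_hot(2)]
print(build_sop(frames))  # -> Speaker 1: hello Speaker 2: goodbye
```

The resulting string would be prepended to the LLM input as a structured prompt, explicitly encoding speaker order and content boundaries before the model attends to the mixed speech encoding.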