Serialized Output Prompting for Large Language Model-based Multi-Talker Speech Recognition

📅 2025-08-31
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing LLM-based multi-speaker automatic speech recognition (ASR) systems largely neglect prompt engineering, relying on simplistic task instructions and lacking structured prompts tailored for speaker separation and sequential transcription. Method: This paper presents the first systematic exploration of LLM prompting for multi-speaker ASR, introducing Serialized Output Prompting, a paradigm that explicitly models speaker order and content boundaries to guide LLMs in disentangling overlapping speech and generating temporally ordered transcriptions. Technically, the authors propose a serialized CTC layer and a three-stage collaborative training strategy integrating a speech encoder, a separation module, and a greedy decoder for end-to-end prompt-driven recognition. Results: Evaluated on LibriMix, the approach achieves significant WER reductions in both two- and three-speaker scenarios, demonstrating that this prompting paradigm effectively unlocks the LLM's capacity to model complex, overlapping speech.

๐Ÿ“ Abstract
Prompts are crucial for task definition and for improving the performance of large language model (LLM)-based systems. However, existing LLM-based multi-talker (MT) automatic speech recognition (ASR) systems either omit prompts or rely on simple task-definition prompts, with no prior work exploring the design of prompts to enhance performance. In this paper, we propose extracting serialized output prompts (SOP) and explicitly guiding the LLM with structured prompts to improve system performance (SOP-MT-ASR). Separator and serialized Connectionist Temporal Classification (CTC) layers are inserted after the speech encoder to separate and extract MT content from the mixed speech encoding in a first-speaking-first-out manner. The SOP, which serves as a prompt for the LLM, is then obtained by decoding the serialized CTC outputs with greedy search. To train the model effectively, we design a three-stage training strategy consisting of serialized output training (SOT) fine-tuning, serialized speech information extraction, and SOP-based adaptation. Experimental results on the LibriMix dataset show that, although the LLM-based SOT model performs well in the two-talker scenario, it fails to fully leverage the LLM under more complex conditions, such as the three-talker scenario. The proposed SOP approach significantly improves performance under both two- and three-talker conditions.
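To make the SOP construction concrete, here is a minimal sketch of the decoding step the abstract describes: per-speaker frame-level CTC outputs are greedily decoded (collapse repeats, drop blanks) and concatenated in first-speaking-first-out order to form the prompt. The blank index, the `<sc>` speaker-change separator, and the toy vocabulary are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch: build a serialized output prompt (SOP) from
# per-speaker CTC label sequences. Blank index, separator token, and
# vocabulary are illustrative assumptions.

BLANK = 0      # assumed CTC blank index
SEP = "<sc>"   # assumed speaker-change separator token

def ctc_greedy_decode(frame_ids, blank=BLANK):
    """Standard CTC collapse rule: merge repeated labels, then drop blanks."""
    out, prev = [], None
    for label in frame_ids:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

def build_sop(per_speaker_frames, vocab, blank=BLANK, sep=SEP):
    """Decode each speaker's stream and join them first-speaking-first-out."""
    texts = []
    for frames in per_speaker_frames:  # assumed already in speaking order
        ids = ctc_greedy_decode(frames, blank)
        texts.append("".join(vocab[i] for i in ids))
    return f" {sep} ".join(texts)

# Toy example: two speakers, character-level vocabulary.
vocab = {1: "h", 2: "i", 3: "y", 4: "o"}
prompt = build_sop([[1, 1, 0, 2], [3, 0, 4, 4]], vocab)
# prompt == "hi <sc> yo"
```

In the paper's pipeline this string would then be fed to the LLM as a structured prompt alongside the speech encoding, rather than used as the final transcription.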
Problem

Research questions and friction points this paper is trying to address.

Designing effective prompts for multi-talker ASR systems
Improving LLM performance in complex multi-speaker scenarios
Extracting serialized outputs to guide speech recognition models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Serialized Output Prompting for multi-talker ASR
Separator and CTC layers extract speech content
Three-stage training strategy enhances model performance