SpeakerLM: End-to-End Versatile Speaker Diarization and Recognition with Multimodal Large Language Models

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing cascaded speaker diarization and recognition (SDR) systems suffer from error propagation, difficulty in modeling overlapping speech, and a lack of joint optimization across tasks. This paper proposes the first end-to-end unified multimodal large language model framework that jointly models speaker diarization (SD) and automatic speech recognition (ASR), supporting flexible speaker registration, including zero-shot, few-shot, and fully registered scenarios. By integrating audio and text modalities and leveraging multi-stage training on real-world data, the framework enables cross-task joint optimization. It achieves significant improvements over state-of-the-art cascaded systems on multiple in-domain and cross-domain SDR benchmarks, demonstrating superior generalization, robustness to acoustic variability, and scalability with increasing data volume.

📝 Abstract
The Speaker Diarization and Recognition (SDR) task aims to predict "who spoke when and what" within an audio clip, which is a crucial task in various real-world multi-speaker scenarios such as meeting transcription and dialogue systems. Existing SDR systems typically adopt a cascaded framework, combining multiple modules such as speaker diarization (SD) and automatic speech recognition (ASR). The cascaded systems suffer from several limitations, such as error propagation, difficulty in handling overlapping speech, and lack of joint optimization for exploring the synergy between SD and ASR tasks. To address these limitations, we introduce SpeakerLM, a unified multimodal large language model for SDR that jointly performs SD and ASR in an end-to-end manner. Moreover, to facilitate diverse real-world scenarios, we incorporate a flexible speaker registration mechanism into SpeakerLM, enabling SDR under different speaker registration settings. SpeakerLM is progressively developed with a multi-stage training strategy on large-scale real data. Extensive experiments show that SpeakerLM demonstrates strong data scaling capability and generalizability, outperforming state-of-the-art cascaded baselines on both in-domain and out-of-domain public SDR benchmarks. Furthermore, experimental results show that the proposed speaker registration mechanism effectively ensures robust SDR performance of SpeakerLM across diverse speaker registration conditions and varying numbers of registered speakers.
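The SDR task output described above ("who spoke when and what") can be pictured as a list of time-stamped, speaker-attributed transcript segments. The sketch below is illustrative only; the data structure, field names, and toy values are assumptions for exposition, not taken from the paper.

```python
# Illustrative sketch of an SDR result: each segment answers
# "who" (speaker), "when" (start/end), and "what" (text).
# Structure and names are assumptions, not the paper's format.
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str   # speaker label or registered identity
    start: float   # segment start time in seconds
    end: float     # segment end time in seconds
    text: str      # recognized words for this segment

# Toy result for a two-speaker clip; the interval 3.8-4.2 s
# is overlapping speech (both speakers active at once).
result = [
    Segment("spk1", 0.0, 4.2, "shall we start the meeting"),
    Segment("spk2", 3.8, 6.5, "yes let's begin"),
]

def speaking_time(segments, speaker):
    """Total speech duration attributed to one speaker."""
    return sum(s.end - s.start for s in segments if s.speaker == speaker)

print(round(speaking_time(result, "spk1"), 1))  # 4.2
print(round(speaking_time(result, "spk2"), 1))  # 2.7
```

Overlap is exactly what makes this hard for cascaded systems: a diarization module that assigns each frame to a single speaker cannot represent the 3.8-4.2 s region where both segments are active.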
Problem

Research questions and friction points this paper is trying to address.

End-to-end joint speaker diarization and speech recognition
Handling overlapping speech and error propagation issues
Flexible speaker registration across diverse real-world scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end multimodal large language model
Flexible speaker registration mechanism
Multi-stage training on large-scale data
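The flexible registration mechanism listed above spans settings from no enrolled speakers to all speakers enrolled. One way to picture how such settings might condition the model is as different prompt headers; the function, prompt strings, and tag format below are purely hypothetical illustrations, not the paper's actual input format.

```python
# Hypothetical sketch: expressing the three registration settings
# (none, partial, all speakers registered) as a text prompt.
# Prompt wording and the <enroll> tag are invented for illustration.
def build_prompt(registered: dict, num_speakers: int) -> str:
    """registered maps speaker names to placeholder enrollment-audio refs."""
    if not registered:
        header = "No registered speakers; assign anonymous labels."
    elif len(registered) < num_speakers:
        header = "Some speakers registered; label the rest anonymously."
    else:
        header = "All speakers registered; use their names."
    enroll = "".join(f"\n<enroll name={n} audio={a}>"
                     for n, a in registered.items())
    return header + enroll + "\nTranscribe with speaker labels and timestamps."

# Zero-shot setting: no enrollment audio is provided.
print(build_prompt({}, 2).splitlines()[0])
# Partial setting: one of two speakers is registered.
print(build_prompt({"Alice": "alice.wav"}, 2).splitlines()[0])
```

The point of a single conditional interface like this is that one model can serve all three scenarios, rather than training separate systems per registration condition.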