🤖 AI Summary
Existing cascaded speaker diarization and recognition (SDR) systems suffer from error propagation, difficulty in modeling overlapping speech, and a lack of joint optimization across tasks. This paper proposes the first end-to-end unified multimodal large language model framework that jointly models speaker diarization (SD) and automatic speech recognition (ASR), supporting flexible speaker registration (enrollment), including zero-shot, few-shot, and fully registered scenarios. By integrating audio and text modalities and leveraging multi-stage training on real-world data, the framework enables cross-task joint optimization. It achieves significant improvements over state-of-the-art cascaded systems on multiple in-domain and cross-domain SDR benchmarks, demonstrating strong generalization, robustness to acoustic variability, and scalability with increasing data volume.
📝 Abstract
The Speaker Diarization and Recognition (SDR) task aims to predict "who spoke when and what" within an audio clip, a capability crucial to various real-world multi-speaker scenarios such as meeting transcription and dialogue systems. Existing SDR systems typically adopt a cascaded framework, combining multiple modules such as speaker diarization (SD) and automatic speech recognition (ASR). These cascaded systems suffer from several limitations, including error propagation, difficulty in handling overlapping speech, and the lack of joint optimization to exploit the synergy between the SD and ASR tasks. To address these limitations, we introduce SpeakerLM, a unified multimodal large language model for SDR that jointly performs SD and ASR in an end-to-end manner. Moreover, to facilitate diverse real-world scenarios, we incorporate a flexible speaker registration mechanism into SpeakerLM, enabling SDR under different speaker registration settings. SpeakerLM is progressively developed with a multi-stage training strategy on large-scale real data. Extensive experiments show that SpeakerLM demonstrates strong data scaling capability and generalizability, outperforming state-of-the-art cascaded baselines on both in-domain and out-of-domain public SDR benchmarks. Furthermore, experimental results show that the proposed speaker registration mechanism ensures robust SDR performance across diverse registration conditions and varying numbers of registered speakers.
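To make the "who spoke when and what" objective concrete, the sketch below models SDR output as time-stamped, speaker-attributed text segments and checks for overlapping speech, the case the abstract notes is hard for cascaded SD+ASR pipelines. The segment schema, speaker labels, and timings here are illustrative assumptions, not the paper's actual output format.

```python
from dataclasses import dataclass

@dataclass
class SDRSegment:
    """One unit of a 'who spoke when and what' prediction (illustrative)."""
    speaker: str   # speaker label, or a registered speaker identity
    start: float   # segment start time, seconds
    end: float     # segment end time, seconds
    text: str      # transcribed words for this segment

def overlaps(a: SDRSegment, b: SDRSegment) -> bool:
    """True when two segments share any span of time."""
    return a.start < b.end and b.start < a.end

# A toy two-speaker transcript with one overlapping region --
# an end-to-end model can attribute both streams jointly, whereas
# a cascaded pipeline must first segment, then transcribe.
segments = [
    SDRSegment("spk1", 0.0, 2.4, "let's review the agenda"),
    SDRSegment("spk2", 2.1, 3.8, "sure, go ahead"),  # overlaps spk1
]

print(overlaps(segments[0], segments[1]))  # True
```

In a zero-shot registration setting the `speaker` field would carry anonymous labels like `spk1`; with registered speakers it could carry enrolled identities instead.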