🤖 AI Summary
To address target-speaker automatic speech recognition (ASR) in multi-speaker dialogues, this paper proposes a lightweight and efficient Whisper adaptation method. Specifically, it adds learnable bias terms before the first Transformer layer, conditioned on frame-level speaker labels (derived from speaker diarization outputs), to steer the model toward the target speaker. Crucially, only a single learnable bias term per speaker label type is added, eliminating the need to model a complex speaker embedding space. This minimalist design yields a simple, end-to-end Whisper adaptation for target-speaker ASR. On the NOTSOFAR-1 benchmark, the approach reduces the optimal reference combination word error rate (ORC-WER) by 12.9 percentage points absolute over a speech separation and diarization cascade baseline, significantly improving speaker-attributed transcription accuracy and practical system utility.
📝 Abstract
We propose a novel approach that enables the use of large, single-speaker ASR models, such as Whisper, for target-speaker ASR. The key claim of this method is that it is much easier to model relative differences among speakers by learning to condition on frame-level diarization outputs than to learn the space of all speaker embeddings. We find that adding even a single bias term per diarization output type before the first transformer block can transform single-speaker ASR models into target-speaker ASR models. Our approach also supports speaker-attributed ASR by sequentially generating transcripts for each speaker in a diarization output. This simplified method outperforms the baseline speech separation and diarization cascade by 12.9% absolute ORC-WER on the NOTSOFAR-1 dataset.
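The core mechanism described above — adding a bias term per diarization label type to the frame-level features before the first transformer block — can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the dimensions, the use of a bias *vector* per label (the paper says "a single bias term", which may be a scalar), and the binary target/non-target label set are all assumptions for the sake of the example.

```python
import numpy as np

# Hypothetical sizes, not from the paper
T, d_model = 6, 4       # audio frames, encoder feature dimension
n_label_types = 2       # diarization label types: 0 = non-target, 1 = target speaker

rng = np.random.default_rng(0)
frames = rng.standard_normal((T, d_model))            # encoder input features
bias = rng.standard_normal((n_label_types, d_model))  # one learnable bias per label type

# Frame-level diarization output: marks which frames belong to the target speaker
labels = np.array([0, 1, 1, 0, 1, 0])

# Condition the features: add the label-dependent bias before the first
# transformer block, steering the (frozen) single-speaker ASR encoder
# toward the target speaker's frames
conditioned = frames + bias[labels]
```

To transcribe a different speaker from the same recording, only `labels` changes (target frames re-marked as 1), which is how sequential speaker-attributed decoding would reuse the same model.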