🤖 AI Summary
This work addresses the limitations of existing forced alignment methods, which exhibit strong language dependency and suffer from cumulative temporal drift in long-form speech. To overcome these issues, the authors propose a non-autoregressive alignment paradigm based on slot filling, reframing the alignment task as discrete timestamp index prediction. By leveraging a speech large language model equipped with causal attention masking and a dynamic slot insertion mechanism, the method enables efficient and flexible alignment at arbitrary positions. This approach inherently supports multilingual and crosslingual scenarios as well as long utterances, effectively mitigating hallucinations while significantly improving inference efficiency. Experimental results demonstrate a 69%–78% reduction in cumulative mean offset compared with prior methods under multilingual and long-speech settings.
📝 Abstract
Forced alignment (FA) predicts start and end timestamps for words or characters in speech, but existing methods are language-specific and prone to cumulative temporal shifts. The multilingual speech understanding and long-sequence processing abilities of speech large language models (SLLMs) make them promising for FA in multilingual, crosslingual, and long-form speech settings. However, directly applying the next-token prediction paradigm of SLLMs to FA results in hallucinations and slow inference. To bridge this gap, we propose LLM-ForcedAligner, which reformulates FA as a slot-filling paradigm: timestamps are treated as discrete indices, and special timestamp tokens are inserted as slots into the transcript. Conditioned on the speech embeddings and the transcript with slots, the SLLM directly predicts the time indices at the slot positions. During training, causal attention masking with non-shifted input and label sequences allows each slot to predict its own timestamp index from itself and the preceding context, with loss computed only at slot positions. Dynamic slot insertion enables FA at arbitrary positions. Moreover, non-autoregressive inference is supported, avoiding hallucinations and improving speed. Experiments across multilingual, crosslingual, and long-form speech scenarios show that LLM-ForcedAligner achieves a 69%–78% relative reduction in accumulated average shift compared with prior methods. Checkpoint and inference code are available at https://github.com/QwenLM/Qwen3-ASR.
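To make the slot-filling formulation concrete, the sketch below builds the kind of input/label pair the abstract describes: a slot token is inserted around each transcript word, the non-shifted label sequence carries a discrete timestamp index only at slot positions, and all other positions are masked out of the loss. The slot token name (`<ts>`) and the `-100` ignore label are assumptions borrowed from common LLM training conventions, not the authors' actual implementation.

```python
IGNORE = -100  # conventional "no loss here" label, as in typical LLM training code

def build_slot_sequences(words, timestamps):
    """Insert a timestamp slot before and after each word, and build a
    non-shifted label sequence: discrete time indices at slot positions,
    IGNORE everywhere else, so loss is computed only at the slots.

    `timestamps` holds (start_index, end_index) pairs of discretized
    frame/time indices, one pair per word.
    """
    assert len(timestamps) == len(words)
    inputs, labels = [], []
    for word, (start_idx, end_idx) in zip(words, timestamps):
        inputs.append("<ts>")      # slot for the word's start time
        labels.append(start_idx)   # supervised: discrete start index
        inputs.append(word)
        labels.append(IGNORE)      # transcript token: excluded from loss
        inputs.append("<ts>")      # slot for the word's end time
        labels.append(end_idx)     # supervised: discrete end index
    return inputs, labels

inputs, labels = build_slot_sequences(["hello", "world"], [(3, 12), (14, 25)])
# inputs: ['<ts>', 'hello', '<ts>', '<ts>', 'world', '<ts>']
# labels: [3, -100, 12, 14, -100, 25]
```

Because the labels are aligned one-to-one with the inputs (non-shifted) and every slot only needs itself and its left context under causal masking, all slot predictions can be read out in a single forward pass, which is what enables the non-autoregressive inference claimed above.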