🤖 AI Summary
This study systematically evaluates the practical efficacy of speech large language models (SpeechLLMs) on speech-to-text translation (ST), assessing whether they outperform conventional ASR+MT cascade systems. It introduces the first comprehensive ST test suite, spanning 16 benchmarks, 13 language pairs, and 9 challenging conditions (including disfluent, noisy, and long-form speech), and compares 5 state-of-the-art SpeechLLMs head-to-head against 16 strong direct and cascade systems that couple leading speech foundation models (SFMs) with multilingual LLMs. Results show that current end-to-end SpeechLLMs do not yet surpass high-quality cascade systems and only match them in selected settings, while pure SFMs lag behind both; integrating an LLM, whether within the model or as a pipeline stage, proves essential for high-quality translation. Core contributions: (i) the first multidimensional ST evaluation test suite, and (ii) empirical evidence that LLM integration is indispensable in the ST pipeline.
📝 Abstract
As Large Language Models (LLMs) expand beyond text, integrating speech as a native modality has given rise to SpeechLLMs, which aim to translate spoken language directly, thereby bypassing traditional transcription-based pipelines. Whether this integration improves speech-to-text translation quality over established cascaded architectures, however, remains an open question. We present Hearing to Translate, the first comprehensive test suite rigorously benchmarking 5 state-of-the-art SpeechLLMs against 16 strong direct and cascade systems that couple leading speech foundation models (SFMs) with multilingual LLMs. Our analysis spans 16 benchmarks, 13 language pairs, and 9 challenging conditions, including disfluent, noisy, and long-form speech. Across this extensive evaluation, we find that cascaded systems remain the most reliable overall, while current SpeechLLMs only match cascades in selected settings and SFMs lag behind both, highlighting that integrating an LLM, either within the model or in a pipeline, is essential for high-quality speech translation.