SimulS2S-LLM: Unlocking Simultaneous Inference of Speech LLMs for Speech-to-Speech Translation

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of balancing low latency and high translation quality in real-time speech-to-speech translation (S2ST) with large language models (LLMs). To this end, it proposes SimulS2S-LLM, an end-to-end framework that trains speech LLMs offline and applies a test-time policy to guide simultaneous inference. The method introduces three key components: (1) boundary-aware speech prompts that extract salient semantic information from streaming audio segments, reducing the mismatch between offline training and streaming inference; (2) discrete output speech token prediction followed by speech synthesis with a pre-trained vocoder; and (3) an incremental beam search that expands the search space of speech token prediction without increasing latency. Evaluated on the CVSS dataset, the approach improves ASR-BLEU by 3 points at comparable latency over existing methods that use the same training data, delivering a clear advance in the quality-latency trade-off.

📝 Abstract
Simultaneous speech translation (SST) outputs translations in parallel with streaming speech input, balancing translation quality and latency. While large language models (LLMs) have been extended to handle the speech modality, streaming remains challenging because speech is prepended as a prompt for the entire generation process. To unlock LLM streaming capability, this paper proposes SimulS2S-LLM, which trains speech LLMs offline and employs a test-time policy to guide simultaneous inference. SimulS2S-LLM alleviates the mismatch between training and inference by extracting boundary-aware speech prompts that can be better matched with text input data. SimulS2S-LLM achieves simultaneous speech-to-speech translation (Simul-S2ST) by predicting discrete output speech tokens and then synthesising output speech with a pre-trained vocoder. An incremental beam search is designed to expand the search space of speech token prediction without increasing latency. Experiments on the CVSS speech data show that SimulS2S-LLM offers a better translation quality-latency trade-off than existing methods that use the same training data, for example improving ASR-BLEU scores by 3 points at similar latency.
Problem

Research questions and friction points this paper is trying to address.

Enable simultaneous speech-to-speech translation with LLMs
Reduce training-inference mismatch in streaming speech models
Improve translation quality-latency trade-off in SST tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Offline training with test-time policy
Boundary-aware speech prompt extraction
Incremental beam search for token prediction
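The incremental beam search idea above can be illustrated with a minimal sketch: at each streaming chunk, several hypotheses over discrete speech tokens are expanded in parallel, but only the best one is committed, so the number of emitted tokens (and hence latency) matches greedy decoding. All names here (`step_scores`, the scoring interface) are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable, List, Tuple

def incremental_beam_search(
    step_scores: Callable[[Tuple[int, ...]], List[Tuple[int, float]]],
    n_steps: int,
    beam_size: int = 4,
) -> Tuple[int, ...]:
    """Expand `beam_size` hypotheses for `n_steps` new speech tokens,
    then commit only the single best hypothesis for this chunk.

    `step_scores(prefix)` is a hypothetical model interface returning
    (token, log_prob) candidates given the tokens decoded so far.
    """
    # Each beam is (token_prefix, cumulative_log_prob).
    beams: List[Tuple[Tuple[int, ...], float]] = [((), 0.0)]
    for _ in range(n_steps):
        candidates = []
        for prefix, score in beams:
            for tok, logp in step_scores(prefix):
                candidates.append((prefix + (tok,), score + logp))
        # Keep the top-`beam_size` scoring hypotheses.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    # Commit the best hypothesis; search was wider, but output length
    # (and thus latency) is the same as greedy decoding.
    return beams[0][0]
```

With a toy scorer that always prefers token 1, `incremental_beam_search(lambda p: [(0, -1.0), (1, -0.5)], 2, beam_size=2)` returns `(1, 1)`.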
Keqi Deng
University of Cambridge
Speech processing · Translation · Large language model
Wenxi Chen
Shanghai Jiao Tong University, Shanghai, China
Xie Chen
Shanghai Jiao Tong University, Shanghai, China
Phil Woodland
Department of Engineering, University of Cambridge, Trumpington St., Cambridge, UK.