AI Summary
Existing low-latency simultaneous interpretation (SI) research lacks a unified, real-world-oriented evaluation framework, hindering fair comparison across audio streaming segmentation, component-level latency, translation quality, and system types (cascade vs. end-to-end; revision-enabled vs. fixed-output).
Method: We propose the first end-to-end, full-pipeline evaluation framework for low-latency SI, integrating streaming speech processing, adaptive segmentation strategies, latency-aware BLEU/TER metrics, and modular performance profiling.
Contribution/Results: Our framework enables automatic quantification of the latency-quality trade-off and provides an interactive web interface. Experiments demonstrate that revision mechanisms significantly improve translation quality, and reveal the practical advantages of end-to-end systems under strict low-latency constraints. This work establishes a standardized benchmark for SI system development and evaluation.
Abstract
The challenge of low-latency speech translation has recently drawn significant interest in the research community, as shown by several publications and shared tasks. It is therefore essential to evaluate these different approaches in realistic scenarios. However, currently only specific aspects of the systems are evaluated, and it is often not possible to compare different approaches. In this work, we propose the first framework to perform and evaluate the various aspects of low-latency speech translation under realistic conditions. The evaluation is carried out in an end-to-end fashion, including the segmentation of the audio as well as the run-time of the different components. Second, we compare different approaches to low-latency speech translation using this framework. We evaluate models with the option to revise the output as well as methods with fixed output. Furthermore, we directly compare state-of-the-art cascaded and end-to-end systems. Finally, the framework allows the translation quality and latency to be evaluated automatically, and it also provides a web interface to show the low-latency model outputs to the user.
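To make the latency side of the evaluation concrete, here is a minimal sketch of one widely used latency metric for simultaneous translation, Average Lagging (Ma et al., 2019). This is an illustration only: the function name and interface are ours, and the paper's framework may combine this with other latency and quality measures.

```python
def average_lagging(delays, src_len, tgt_len):
    """Average Lagging (AL) over token indices.

    delays[t] = number of source tokens read before emitting
    the (t+1)-th target token. AL measures how far, on average,
    the system lags behind an ideal fully-synchronous translator.
    """
    gamma = tgt_len / src_len  # expected target tokens per source token
    # tau: first target position (1-based) at which the full source
    # has been read; beyond it, lagging is no longer informative.
    tau = next((t + 1 for t, d in enumerate(delays) if d >= src_len), tgt_len)
    return sum(delays[t] - t / gamma for t in range(tau)) / tau


# Example: a wait-3 policy on a 5-token source/target pair lags
# by a constant 3 tokens.
print(average_lagging([3, 4, 5, 5, 5], src_len=5, tgt_len=5))
```

Lower AL means lower latency; plotting AL against BLEU for each system yields the latency-quality trade-off curves that the framework quantifies automatically.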