🤖 AI Summary
This work addresses end-to-end sign language video-to-text translation without gloss annotations. Methodologically, it introduces a novel paradigm that eliminates reliance on gloss-based intermediate representations and avoids fine-tuning of visual encoders. For the first time, spatial configuration and motion dynamics features—extracted directly from off-the-shelf vision encoders (ViT/I3D)—are injected into a large language model (LLM). Cross-modal semantic alignment is achieved through vision–text contrastive pre-alignment and a language prompt. Crucially, the approach decouples visual representation learning from linguistic generation, thereby circumventing the gloss bottleneck and eliminating the need for domain-specific adaptation of visual models. Evaluated on PHOENIX14T and How2Sign, the method achieves state-of-the-art performance, with significant improvements in translation accuracy and cross-scenario generalization.
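The vision–text contrastive pre-alignment described above can be sketched as follows. The paper's exact objective, temperature, and batching are not given here, so this minimal NumPy example assumes a CLIP-style symmetric InfoNCE loss over paired video and text embeddings; the function name and all hyperparameters are illustrative, not the authors' implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_style_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired video/text embeddings.

    A common choice for vision-text pre-alignment; the actual loss used
    by the paper is an assumption here.
    """
    v = l2_normalize(video_emb)
    t = l2_normalize(text_emb)
    logits = v @ t.T / temperature   # (B, B) cosine-similarity matrix
    labels = np.arange(len(v))       # matched pairs sit on the diagonal

    def xent(logits, labels):
        # numerically stable cross-entropy over rows
        logits = logits - logits.max(axis=1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    # average of video->text and text->video directions
    return 0.5 * (xent(logits, labels) + xent(logits.T, labels))

rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8))
loss_mismatched = clip_style_contrastive_loss(v, rng.normal(size=(4, 8)))
loss_matched = clip_style_contrastive_loss(v, v)  # perfectly aligned pairs
```

As a sanity check, perfectly aligned pairs yield a lower loss than random pairings, which is the signal the warm-up stage exploits before SLT supervision begins.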
📝 Abstract
Gloss-free Sign Language Translation (SLT) converts sign videos directly into spoken language sentences without relying on glosses. Recently, Large Language Models (LLMs) have shown remarkable translation performance in gloss-free methods by harnessing their powerful natural language generation capabilities. However, these methods often rely on domain-specific fine-tuning of visual encoders to achieve optimal results. By contrast, this paper emphasizes the importance of capturing the spatial configurations and motion dynamics inherent in sign language. With this in mind, we introduce Spatial and Motion-based Sign Language Translation (SpaMo), a novel LLM-based SLT framework. The core idea of SpaMo is simple yet effective. We first extract spatial and motion features using off-the-shelf visual encoders and then input these features into an LLM with a language prompt. Additionally, we employ a visual-text alignment process as a warm-up before the SLT supervision. Our experiments demonstrate that SpaMo achieves state-of-the-art performance on two popular datasets, PHOENIX14T and How2Sign.
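The overall pipeline described in the abstract can be sketched in a few lines. All shapes, feature dimensions, and the use of simple linear projections below are illustrative assumptions (the original work's projection modules and prompt wording are not specified here); random arrays stand in for outputs of the frozen ViT and I3D encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder features standing in for frozen off-the-shelf encoders:
# per-frame spatial features (e.g. ViT CLS tokens) and clip-level
# motion features (e.g. I3D outputs). Shapes are illustrative assumptions.
T, D_VIT, D_I3D, D_LLM = 16, 768, 1024, 4096
spatial = rng.normal(size=(T, D_VIT))        # one vector per frame
motion = rng.normal(size=(T // 4, D_I3D))    # one vector per short clip

# Learned linear projections (randomly initialized here) map both feature
# types into the LLM's token-embedding space, so the video becomes a
# sequence of "visual tokens" the LLM can attend to.
W_spatial = rng.normal(size=(D_VIT, D_LLM)) * 0.02
W_motion = rng.normal(size=(D_I3D, D_LLM)) * 0.02
visual_tokens = np.concatenate([spatial @ W_spatial, motion @ W_motion], axis=0)

# A language prompt (placeholder embeddings here, e.g. for a string like
# "Translate the sign language video into text:") is prepended so the LLM
# conditions its generation on the visual tokens.
prompt_tokens = rng.normal(size=(5, D_LLM))
llm_input = np.concatenate([prompt_tokens, visual_tokens], axis=0)
```

Note that only the projections would be trained against the SLT objective; the visual encoders stay frozen, which is what lets the method avoid domain-specific fine-tuning.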