🤖 AI Summary
This work addresses the challenges of scarce annotated data and low temporal alignment accuracy in sign language video–caption alignment. To tackle these issues, we propose a temporal alignment framework that integrates linguistic priors with self-supervised learning. Methodologically: (1) we preprocess captions using British Sign Language (BSL) grammar constraints to enhance linguistic structural coherence; (2) we introduce a selective alignment loss that applies supervision only during intervals where the queried sign is actually produced; and (3) we replace noisy audio-heuristic alignment labels with high-confidence pseudo-labels generated through self-training. Experiments demonstrate substantial improvements over prior methods in frame-level accuracy and F1 score, establishing new state-of-the-art performance on sign language video–text temporal alignment. Our approach provides a scalable, low-resource paradigm for sign language understanding, particularly beneficial in settings where manual annotation is prohibitively expensive or unavailable.
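To make contribution (2) concrete, below is a minimal sketch of how such a selective alignment loss could look: frame-level supervision is applied only to clips in which the queried sign actually occurs. The function name, tensor shapes, and masking scheme are illustrative assumptions of ours, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def selective_alignment_loss(frame_logits, frame_targets, sign_present):
    """Hypothetical sketch of a selective alignment loss.

    frame_logits:  (B, T) per-frame alignment scores for the queried sign
    frame_targets: (B, T) binary labels (float) marking frames where it occurs
    sign_present:  (B,)   1 if the queried sign actually occurs in the clip

    Clips that do not contain the queried sign contribute no alignment loss,
    so the model is never pushed to localise a sign that is absent.
    """
    per_frame = F.binary_cross_entropy_with_logits(
        frame_logits, frame_targets, reduction="none"
    )                                            # (B, T)
    per_clip = per_frame.mean(dim=1)             # (B,)
    mask = sign_present.float()                  # keep only clips with the sign
    return (per_clip * mask).sum() / mask.sum().clamp(min=1.0)
```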
📝 Abstract
The objective of this work is to align asynchronous subtitles in sign language videos with limited labelled data. To achieve this goal, we propose a novel framework with the following contributions: (1) we leverage fundamental grammatical rules of British Sign Language (BSL) to pre-process the input subtitles, (2) we design a selective alignment loss that optimises the model to predict the temporal location of a sign only when the queried sign actually occurs in the scene, and (3) we conduct self-training with refined pseudo-labels that are more accurate than the heuristic audio-aligned labels. As a result, our model not only better captures the correlation between text and signs, but also shows promise for sign language translation, particularly in scenarios where manual labelling of large-scale sign data is impractical. Extensive experimental results demonstrate that our approach achieves state-of-the-art performance, surpassing previous baselines by substantial margins in both frame-level accuracy and F1-score, highlighting the effectiveness and practicality of our framework for sign language video alignment and translation.
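Contribution (3) amounts to filtering the model's own predictions to replace noisy audio-aligned spans. The sketch below shows one plausible refinement rule, assuming a model that outputs per-frame probabilities for the queried sign; the confidence threshold and fallback behaviour are our assumptions, not the paper's exact procedure.

```python
import torch

@torch.no_grad()
def refine_pseudo_label(model, clip, audio_span, threshold=0.9):
    """Hypothetical pseudo-label refinement step for self-training.

    clip:       (T, D) frame features for one subtitle query
    audio_span: (start, end) noisy span from audio-aligned subtitles

    Returns the model's predicted span when it is confident, otherwise
    falls back to the heuristic audio-aligned span.
    """
    probs = torch.sigmoid(model(clip))           # (T,) per-frame probability
    active = probs > 0.5                         # frames marked as signing
    if active.any() and probs[active].mean() >= threshold:
        idx = active.nonzero(as_tuple=True)[0]
        return int(idx.min()), int(idx.max()) + 1   # refined pseudo-label
    return audio_span                            # keep the heuristic label
```

Retraining on spans kept by such a rule would expose the model only to labels it (or the heuristic) supports with high confidence, which is the usual rationale for self-training with filtered pseudo-labels.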