Sigma: Semantically Informative Pre-training for Skeleton-based Sign Language Understanding

๐Ÿ“… 2025-09-25
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This paper addresses three key challenges in skeleton-based sign language understanding: weak semantic representation, imbalance between local and global information, and difficulty in cross-modal alignment. To this end, we propose a semantic-aware skeleton pre-training framework. Our method introduces: (1) a sign-aware early-fusion mechanism that explicitly models correspondences between skeletal motion dynamics and linguistic meaning; (2) a multi-level alignment strategy jointly optimizing fine-grained action features and global semantic context; and (3) a unified multi-task pre-training paradigm integrating contrastive learning, text matching, and language modeling to achieve skeletonโ€“text cross-modal semantic alignment. Evaluated on isolated word recognition, continuous sign language recognition, and vocabulary-free translation tasks, our approach achieves state-of-the-art performance. It is the first work to systematically demonstrate the effectiveness and robustness of end-to-end sign language understanding using skeleton modality alone.

Technology Category

Application Category

๐Ÿ“ Abstract
Pre-training has proven effective for learning transferable features in sign language understanding (SLU) tasks. Recently, skeleton-based methods have gained increasing attention because they can robustly handle variations in subjects and backgrounds without being affected by appearance or environmental factors. Current SLU methods continue to face three key limitations: 1) weak semantic grounding, as models often capture low-level motion patterns from skeletal data but struggle to relate them to linguistic meaning; 2) imbalance between local details and global context, with models either focusing too narrowly on fine-grained cues or overlooking them for broader context; and 3) inefficient cross-modal learning, as constructing semantically aligned representations across modalities remains difficult. To address these, we propose Sigma, a unified skeleton-based SLU framework featuring: 1) a sign-aware early fusion mechanism that facilitates deep interaction between visual and textual modalities, enriching visual features with linguistic context; 2) a hierarchical alignment learning strategy that jointly maximises agreements across different levels of paired features from different modalities, effectively capturing both fine-grained details and high-level semantic relationships; and 3) a unified pre-training framework that combines contrastive learning, text matching and language modelling to promote semantic consistency and generalisation. Sigma achieves new state-of-the-art results on isolated sign language recognition, continuous sign language recognition, and gloss-free sign language translation on multiple benchmarks spanning different sign and spoken languages, demonstrating the impact of semantically informative pre-training and the effectiveness of skeletal data as a stand-alone solution for SLU.
Problem

Research questions and friction points this paper is trying to address.

Models struggle to relate skeletal motion patterns to linguistic meaning
Imbalance exists between capturing local details and global context
Inefficient cross-modal learning hinders semantically aligned representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sign-aware early fusion for visual-textual modality interaction
Hierarchical alignment learning across multi-level feature pairs
Unified pre-training combining contrastive and language modeling
๐Ÿ”Ž Similar Papers
No similar papers found.