Cross-Modal Consistency Learning for Sign Language Recognition

📅 2025-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges in isolated sign language recognition (ISLR)—redundant and noisy information in RGB videos and semantic sparsity in pose sequences—this paper proposes a self-supervised cross-modal consistency learning framework. Our method jointly models RGB and graph-structured pose representations, introducing two novel strategies: Motion-Preserving Masking (MPM) and Semantic Positive Mining (SPM), enabling fine-grained cross-modal semantic alignment and robust feature learning. We further incorporate action-aware data augmentation and feature-space alignment to enhance temporal dynamic modeling. Evaluated on four mainstream ISLR benchmarks, our approach achieves state-of-the-art performance with significant improvements in recognition accuracy. The source code will be publicly released.

📝 Abstract
Pre-training has been proven to be effective in boosting the performance of Isolated Sign Language Recognition (ISLR). Existing pre-training methods focus solely on compact pose data, which eliminates background perturbation but inevitably suffers from insufficient semantic cues compared to raw RGB videos. Nevertheless, direct representation learning from RGB videos alone remains challenging due to the presence of sign-independent visual features. To address this dilemma, we propose a Cross-modal Consistency Learning framework (CCL-SLR), which leverages the cross-modal consistency between the RGB and pose modalities based on self-supervised pre-training. First, CCL-SLR employs contrastive learning for instance discrimination within and across modalities. Through single-modal and cross-modal contrastive learning, CCL-SLR gradually aligns the feature spaces of the RGB and pose modalities, thereby extracting consistent sign representations. Second, we further introduce Motion-Preserving Masking (MPM) and Semantic Positive Mining (SPM) techniques to improve cross-modal consistency from the perspectives of data augmentation and sample similarity, respectively. Extensive experiments on four ISLR benchmarks show that CCL-SLR achieves impressive performance, demonstrating its effectiveness. The code will be released to the public.
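
The abstract's core mechanism, instance discrimination within and across modalities, can be illustrated with a minimal InfoNCE-style sketch. Everything below (the function names, the symmetric loss formulation, the temperature value) is an illustrative assumption, not the paper's actual implementation:

```python
# Minimal sketch of single-modal + cross-modal contrastive alignment
# (assumed InfoNCE form; CCL-SLR's exact loss may differ).
import torch
import torch.nn.functional as F

def info_nce(anchor_z: torch.Tensor, target_z: torch.Tensor,
             tau: float = 0.07) -> torch.Tensor:
    """Instance discrimination: row i of anchor_z should match row i
    of target_z; every other row in the batch is a negative."""
    anchor_z = F.normalize(anchor_z, dim=-1)
    target_z = F.normalize(target_z, dim=-1)
    logits = anchor_z @ target_z.t() / tau               # (B, B) similarities
    labels = torch.arange(anchor_z.size(0), device=anchor_z.device)
    return F.cross_entropy(logits, labels)

def ccl_loss(rgb_z, rgb_z_aug, pose_z, pose_z_aug):
    """Single-modal terms pull two augmented views of one clip together;
    cross-modal terms pull the RGB and pose embeddings of the same clip
    together, gradually aligning the two feature spaces."""
    single = info_nce(rgb_z, rgb_z_aug) + info_nce(pose_z, pose_z_aug)
    cross = info_nce(rgb_z, pose_z) + info_nce(pose_z, rgb_z)
    return single + cross
```

Here `rgb_z` and `pose_z` would be batch-aligned embeddings of the same clips from the two encoders; the symmetric cross-modal pair ensures gradients flow into both branches.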
Problem

Research questions and friction points this paper is trying to address.

Improve isolated sign language recognition using cross-modal consistency learning.
Address insufficient semantic cues in pose data and sign-independent features in RGB videos.
Enhance feature alignment between RGB and pose modalities through self-supervised pre-training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal consistency learning for sign language recognition
Contrastive learning aligns RGB and pose feature spaces
Motion-Preserving Masking and Semantic Positive Mining techniques
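
One plausible reading of the two named techniques is sketched below: Motion-Preserving Masking masks the least dynamic joints so augmentation does not destroy a sign's motion cues, and Semantic Positive Mining treats a sample's nearest neighbors in the other modality's embedding space as extra positives. Both interpretations, and all function names, thresholds, and shapes, are assumptions for illustration; the paper's exact formulations may differ.

```python
# Hedged sketch of the two techniques named above; the masking criterion,
# neighbor count, and all names are illustrative assumptions.
import torch
import torch.nn.functional as F

def motion_preserving_mask(pose: torch.Tensor, mask_ratio: float = 0.4) -> torch.Tensor:
    """pose: (T, J, C) keypoint sequence. Zero out the joints with the
    LEAST frame-to-frame motion, so motion-salient joints survive."""
    motion = pose.diff(dim=0).norm(dim=-1).mean(dim=0)   # (J,) mean motion per joint
    n_mask = int(mask_ratio * pose.size(1))
    masked_joints = motion.argsort()[:n_mask]            # lowest-motion joints first
    out = pose.clone()
    out[:, masked_joints] = 0.0
    return out

def mine_semantic_positives(rgb_z: torch.Tensor, pose_z: torch.Tensor,
                            k: int = 2) -> torch.Tensor:
    """For each RGB embedding, return the indices of its k most similar
    pose embeddings; these cross-modal neighbors can then be treated as
    additional positives in the contrastive loss."""
    sim = F.normalize(rgb_z, dim=-1) @ F.normalize(pose_z, dim=-1).t()
    return sim.topk(k, dim=-1).indices                   # (B, k)
```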
Authors

Kepeng Wu — MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
Zecheng Li — MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
Weichao Zhao — MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
Hezhen Hu — University of Texas at Austin
Sign Language Recognition · Sign Language Translation · Video Understanding
Wengang Zhou — Professor, EEIS Department, University of Science and Technology of China
Multimedia Retrieval · Computer Vision · Computer Games
Houqiang Li — Professor, Department of Electronic Engineering and Information Science, University of Science and Technology of China
Multimedia Search · Image/Video Analysis · Image/Video Coding