Sign Language Translation using Frame and Event Stream: Benchmark Dataset and Algorithms

📅 2025-03-09
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing RGB-frame-based sign language translation (SLT) methods suffer from sensitivity to fixed frame rates, illumination variations, and motion blur. To address these limitations, this work introduces event cameras to SLT for the first time, alongside VECSL, the first large-scale open-source RGB-Event sign language benchmark, comprising 15,676 samples and covering 2,568 Chinese characters. The authors propose M²-SLT, a multi-granularity framework that combines fine-grained micro-sign and coarse-grained macro-sign retrieval to improve robustness to rapid hand motion. M²-SLT fuses asynchronous event streams captured by a DVS346 sensor with synchronous RGB frames, enabling dual-granularity gesture representation learning. On VECSL, M²-SLT achieves state-of-the-art performance, outperforming retrained state-of-the-art SLT baselines. Both the VECSL dataset and the M²-SLT code are publicly released, establishing a foundational resource for event-driven sign language understanding research.

📝 Abstract
Accurate sign language understanding serves as a crucial communication channel for individuals with disabilities. Current sign language translation algorithms predominantly rely on RGB frames, which may be limited by fixed frame rates, variable lighting conditions, and motion blur caused by rapid hand movements. Inspired by the recent successful application of event cameras in other fields, we propose to leverage event streams to assist RGB cameras in capturing gesture data, addressing the various challenges mentioned above. Specifically, we first collect a large-scale RGB-Event sign language translation dataset using the DVS346 camera, termed VECSL, which contains 15,676 RGB-Event samples, 15,191 glosses, and covers 2,568 Chinese characters. These samples were gathered across a diverse range of indoor and outdoor environments, capturing multiple viewing angles, varying light intensities, and different camera motions. Due to the absence of benchmark algorithms for comparison in this new task, we retrained and evaluated multiple state-of-the-art SLT algorithms, and believe that this benchmark can effectively support subsequent related research. Additionally, we propose a novel RGB-Event sign language translation framework (i.e., M$^2$-SLT) that incorporates fine-grained micro-sign and coarse-grained macro-sign retrieval, achieving state-of-the-art results on the proposed dataset. Both the source code and dataset will be released on https://github.com/Event-AHU/OpenESL.
Problem

Research questions and friction points this paper is trying to address.

Improve sign language translation accuracy by exploiting event streams.
Address the limitations of RGB frames (fixed frame rate, variable lighting, motion blur) in sign language translation.
Establish a benchmark dataset and baseline algorithms for RGB-Event sign language translation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages event streams alongside RGB frames for sign language translation.
Introduces VECSL, a large-scale dataset of paired RGB-Event samples.
Proposes the M$^2$-SLT framework with micro-sign and macro-sign retrieval.
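To make the RGB-Event pairing concrete, the sketch below shows a common generic preprocessing step for event-camera data: accumulating asynchronous events into a fixed-size voxel grid so it can be fused with a synchronous RGB frame. This is an illustrative NumPy sketch of the general technique, not the actual M²-SLT architecture; the function names and the naive channel-stacking fusion are assumptions for demonstration only.

```python
import numpy as np

def events_to_voxels(events, height, width, num_bins=4):
    """Accumulate asynchronous events (t, x, y, polarity) into a
    voxel grid of shape (num_bins, height, width).

    Timestamps are normalized into `num_bins` temporal slices; each
    event adds its polarity (+1 / -1) to the corresponding cell.
    """
    voxels = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return voxels
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    # Normalize timestamps to [0, num_bins); clip the latest event into the last bin.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * num_bins
    bins = np.clip(t_norm.astype(int), 0, num_bins - 1)
    np.add.at(voxels, (bins, y, x), p)  # unbuffered scatter-add
    return voxels

def fuse_rgb_event(rgb_frame, event_voxels):
    """Naive early fusion: stack RGB channels with event voxel slices."""
    return np.concatenate([rgb_frame.transpose(2, 0, 1), event_voxels], axis=0)

# Toy example: 8 random events on a 32x32 sensor plus one RGB frame.
rng = np.random.default_rng(0)
events = np.stack([
    np.sort(rng.uniform(0.0, 1.0, 8)),   # timestamps
    rng.integers(0, 32, 8),              # x coordinates
    rng.integers(0, 32, 8),              # y coordinates
    rng.choice([-1.0, 1.0], 8),          # polarities
], axis=1)
rgb = rng.uniform(0.0, 1.0, (32, 32, 3)).astype(np.float32)
fused = fuse_rgb_event(rgb, events_to_voxels(events, 32, 32))
print(fused.shape)  # (7, 32, 32): 3 RGB channels + 4 event bins
```

Real pipelines typically feed a representation like this into separate modality encoders before cross-modal fusion, rather than raw channel stacking.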
Xiao Wang
School of Computer Science and Technology, Anhui University, Hefei, China
Yuehang Li
School of Computer Science and Technology, Anhui University, Hefei, China
Fuling Wang
Anhui University
Medical Report Generation
Bo Jiang
School of Computer Science and Technology, Anhui University, Hefei, China
Yaowei Wang
The Hong Kong Polytechnic University
Yonghong Tian
Peng Cheng Laboratory, Shenzhen, China; National Key Laboratory for Multimedia Information Processing, Peking University, China; School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China
Jin Tang
Anhui University
Computer vision, intelligent video analysis
Bin Luo
School of Computer Science and Technology, Anhui University, Hefei, China