🤖 AI Summary
Existing RGB-frame-based sign language translation (SLT) methods suffer from sensitivity to fixed frame rates, illumination variations, and motion blur. To address these limitations, this work introduces event cameras to SLT for the first time, alongside VECSL, the first large-scale open-source RGB-Event sign language benchmark, comprising 15,676 samples and covering 2,568 Chinese characters. The authors propose M²-SLT, a multi-granularity fusion framework that jointly models fine-grained micro-signs and coarse-grained macro-signs while enforcing cross-modal alignment to improve robustness to rapid motion. M²-SLT fuses asynchronous event streams from a DVS346 sensor with synchronous RGB frames, enabling dual-granularity gesture representation learning and retrieval. On VECSL, M²-SLT achieves state-of-the-art performance, significantly outperforming prior RGB-only and hybrid approaches. Both the VECSL dataset and the M²-SLT code are publicly released, establishing a foundational resource for event-driven sign language understanding research.
📝 Abstract
Accurate sign language understanding provides a crucial communication channel for individuals with hearing disabilities. Current sign language translation algorithms predominantly rely on RGB frames, which may be limited by fixed frame rates, variable lighting conditions, and motion blur caused by rapid hand movements. Inspired by the recent successful application of event cameras in other fields, we propose to leverage event streams to assist RGB cameras in capturing gesture data, addressing the challenges mentioned above. Specifically, we first collect a large-scale RGB-Event sign language translation dataset using the DVS346 camera, termed VECSL, which contains 15,676 RGB-Event samples and 15,191 glosses, and covers 2,568 Chinese characters. These samples were gathered across a diverse range of indoor and outdoor environments, capturing multiple viewing angles, varying light intensities, and different camera motions. Because no benchmark algorithms exist for comparison on this new task, we retrained and evaluated multiple state-of-the-art SLT algorithms on VECSL; we believe this benchmark can effectively support subsequent research. Additionally, we propose a novel RGB-Event sign language translation framework (i.e., M$^2$-SLT) that incorporates fine-grained micro-sign and coarse-grained macro-sign retrieval, achieving state-of-the-art results on the proposed dataset. Both the source code and dataset will be released at https://github.com/Event-AHU/OpenESL.
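To make the RGB-Event pairing concrete, the sketch below shows one common way to align asynchronous event data with synchronous RGB frames: accumulating signed events into per-frame event images. This is a minimal illustration of the general technique, not the authors' actual preprocessing pipeline; the `(t, x, y, polarity)` event layout and the function name are assumptions, while the 346×260 resolution matches the DVS346 sensor mentioned above.

```python
import numpy as np

def events_to_frames(events, rgb_timestamps, height=260, width=346):
    """Accumulate asynchronous DVS events into per-RGB-frame event images.

    events: (N, 4) array of (t, x, y, polarity), polarity in {-1, +1}.
    rgb_timestamps: sorted 1-D array of RGB frame times (same time units as t).
    Returns an array of shape (len(rgb_timestamps), height, width), where each
    slice holds the signed event counts for the interval starting at that frame.
    """
    frames = np.zeros((len(rgb_timestamps), height, width), dtype=np.float32)
    # Assign each event to the RGB frame interval it falls into.
    idx = np.searchsorted(rgb_timestamps, events[:, 0], side="right") - 1
    valid = (idx >= 0) & (idx < len(rgb_timestamps))
    for t_idx, x, y, p in zip(idx[valid],
                              events[valid, 1].astype(int),
                              events[valid, 2].astype(int),
                              events[valid, 3]):
        frames[t_idx, y, x] += p  # signed accumulation preserves polarity
    return frames
```

The resulting event images share the RGB frame rate, so a fusion model can consume both modalities as aligned tensor streams; finer-grained representations (e.g., voxel grids with multiple temporal bins per frame) follow the same binning idea.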