🤖 AI Summary
Sentence-level Bangla Sign Language (BdSL) translation has long been hindered by the scarcity of large-scale parallel corpora, forcing reliance on word- or character-level recognition. To address this low-resource challenge, we introduce Bangla-SGP—the first sentence-level BdSL-to-text parallel dataset—comprising 1,000 expert-annotated and 3,000 linguistically grounded synthetic sentence pairs. We propose a novel linguistics-driven Retrieval-Augmented Generation (RAG) framework that integrates syntactic and morphological rules to enable high-fidelity data synthesis, establishing a “professional annotation + controllable synthesis” paradigm. Fine-tuning mBART50, mT5, and GPT-4.1-nano on Bangla-SGP, we conduct systematic evaluation using BLEU and validate cross-dataset consistency on RWTH-PHOENIX-2014T. This work delivers the first reproducible end-to-end baseline for BdSL translation, bridging a critical gap in continuous sign language translation research.
📝 Abstract
Bangla Sign Language (BdSL) translation is a low-resource NLP task owing to the lack of large-scale datasets for sentence-level translation. Correspondingly, existing research in this field has been limited to word- and alphabet-level detection. In this work, we introduce Bangla-SGP, a novel parallel dataset consisting of 1,000 human-annotated sentence-gloss pairs, augmented with around 3,000 synthetically generated pairs produced through a rule-based Retrieval-Augmented Generation (RAG) pipeline that applies syntactic and morphological rules. Each gloss sequence for a spoken Bangla sentence is made up of individual glosses (Bangla sign-supported words) that serve as an intermediate representation of a continuous sign. The 1,000 high-quality Bangla sentences were manually annotated into gloss sequences by a professional signer. The augmentation process incorporates rule-based linguistic strategies and prompt-engineering techniques that we developed by critically analyzing our human-annotated sentence-gloss pairs and by working closely with our professional signer. Furthermore, we fine-tune several transformer-based models, namely mBART50, mT5, and GPT-4.1-nano, and evaluate their sentence-to-gloss translation performance using BLEU scores. Based on these metrics, we compare the models' gloss-translation consistency across our dataset and the RWTH-PHOENIX-2014T benchmark.
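To make the sentence-to-gloss mapping concrete, here is a minimal, hypothetical sketch of one rule-based step of the kind such a pipeline could apply. The function-word list and both rules are illustrative placeholders, not the paper's actual Bangla rules; English tokens stand in for Bangla ones purely for readability.

```python
# Placeholder set standing in for function words (copulas, particles)
# that are typically not realized as distinct signs. Illustrative only.
FUNCTION_WORDS = {"am", "is", "are", "to", "the", "a"}

def sentence_to_gloss(tokens):
    """Map a tokenized sentence to a gloss sequence.

    Rule 1 (assumed): drop function words that have no distinct sign.
    Rule 2 (convention): write remaining content words as uppercase glosses.
    """
    content = [t for t in tokens if t.lower() not in FUNCTION_WORDS]
    return [t.upper() for t in content]

print(sentence_to_gloss("I am going to school".split()))
# → ['I', 'GOING', 'SCHOOL']
```

In the actual pipeline, rules like these would be derived from the professional signer's annotations and encoded as retrieval-grounded prompts rather than a fixed stopword list.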