MaDiS: Taming Masked Diffusion Language Models for Sign Language Generation

📅 2026-01-27
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing autoregressive language models in sign language generation, which suffer from unidirectional context modeling and inefficient sequential inference. To overcome these challenges, the authors propose MaDiS, a masked diffusion-based language model that enables bidirectional context modeling and parallel multi-token generation. Key innovations include a three-stage cross-modal pretraining framework spanning token, latent, and 3D physical spaces; a temporal checkpoint-guided demasking strategy that reduces combinatorial complexity by over 10⁴¹-fold; and learnable gated part-mixture embeddings with an optimized codebook. Experiments demonstrate that MaDiS significantly outperforms current methods on the CSL-Daily, Phoenix-2014T, and How2Sign benchmarks, achieving state-of-the-art results in DTW error, SiBLEU, and SiCLIP metrics while reducing inference latency by nearly 30%.
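The parallel multi-token generation described above can be illustrated with a minimal confidence-based demasking loop, the common decoding pattern for masked diffusion language models. This is a sketch, not the authors' code: `logits_fn`, the step schedule, and the fixed unmasking fractions are all hypothetical stand-ins (the paper's temporal-checkpoint strategy further constrains which positions may be unmasked at each step).

```python
import numpy as np

MASK = -1  # sentinel for a still-masked position

def parallel_demask(logits_fn, seq_len, vocab_size, steps=4):
    """Confidence-based parallel demasking (illustrative sketch).

    At each step, query the model on all masked positions and commit
    the highest-confidence predictions in parallel, rather than one
    token at a time as in autoregressive decoding.
    `logits_fn(tokens, masked_idx)` stands in for the trained model and
    returns an array of shape (len(masked_idx), vocab_size).
    """
    tokens = np.full(seq_len, MASK)
    for step in range(steps):
        masked = np.where(tokens == MASK)[0]
        if masked.size == 0:
            break
        logits = logits_fn(tokens, masked)
        # softmax over the vocabulary for each masked position
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        preds = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        # unmask a growing fraction per step; everything on the last step
        if step == steps - 1:
            k = masked.size
        else:
            k = max(1, masked.size // (steps - step))
        keep = np.argsort(-conf)[:k]
        tokens[masked[keep]] = preds[keep]
    return tokens
```

Because several positions are committed per model call, the number of forward passes is bounded by `steps` instead of `seq_len`, which is the source of the latency reduction the summary reports.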

๐Ÿ“ Abstract
Sign language generation (SLG) aims to translate written texts into expressive sign motions, bridging communication barriers for the Deaf and Hard-of-Hearing communities. Recent studies formulate SLG within the language modeling framework using autoregressive language models, which suffer from unidirectional context modeling and slow token-by-token inference. To address these limitations, we present MaDiS, a masked-diffusion-based language model for SLG that captures bidirectional dependencies and supports efficient parallel multi-token generation. We further introduce a tri-level cross-modal pretraining scheme that jointly learns from token-, latent-, and 3D physical-space objectives, leading to richer and more grounded sign representations. To accelerate model convergence in the fine-tuning stage, we design a novel unmasking strategy with temporal checkpoints, reducing the combinatorial complexity of unmasking orders by over $10^{41}$ times. In addition, a mixture-of-parts embedding layer is developed to effectively fuse information stored in different part-wise sign tokens through learnable gates and well-optimized codebooks. Extensive experiments on CSL-Daily, Phoenix-2014T, and How2Sign demonstrate that MaDiS achieves superior performance across multiple metrics, including DTW error and two newly introduced metrics, SiBLEU and SiCLIP, while reducing inference latency by nearly 30%. Code and models will be released on our project page.
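The mixture-of-parts embedding layer the abstract describes fuses part-wise sign tokens (e.g. body, hands, face) through learnable gates over per-part codebooks. The sketch below shows one plausible form of such a fusion under stated assumptions: the part decomposition, a single global softmax gate, and all variable names are hypothetical, not taken from the paper.

```python
import numpy as np

def mixture_of_parts_embed(part_tokens, codebooks, gate_logits):
    """Gated fusion of part-wise token embeddings (illustrative sketch).

    part_tokens: list of P integer arrays, each of shape (T,), one token
                 stream per body part (assumed decomposition).
    codebooks:   list of P embedding tables, each of shape (V_p, D).
    gate_logits: shape (P,) learnable gate logits (hypothetical: the
                 paper's gates could instead be per-timestep or per-channel).
    Returns the fused embedding sequence of shape (T, D).
    """
    # look up each part's embeddings: P arrays of shape (T, D)
    parts = [codebooks[p][part_tokens[p]] for p in range(len(part_tokens))]
    stacked = np.stack(parts, axis=0)          # (P, T, D)
    gates = np.exp(gate_logits - gate_logits.max())
    gates /= gates.sum()                       # softmax over parts
    # weighted sum over the part axis
    return np.tensordot(gates, stacked, axes=(0, 0))   # (T, D)
```

With the gates learned jointly with the codebooks, the model can weight informative parts more heavily instead of concatenating part embeddings with fixed importance.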
Problem

Research questions and friction points this paper is trying to address.

Sign Language Generation
Masked Diffusion
Bidirectional Context
Parallel Generation
Autoregressive Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

masked diffusion
sign language generation
bidirectional modeling
cross-modal pretraining
parallel generation