AI Summary
Existing sign language production (SLP) methods rely on gloss as an intermediate linguistic representation and therefore suffer from language specificity and a severe scarcity of gloss annotations, which hinders generalization. To address this, we propose Text2SignDiff, the first gloss-free, end-to-end text-to-sign generation framework, built on a non-autoregressive latent diffusion model. Our approach introduces a cross-modal alignment module that constructs a unified latent space jointly embedding textual semantics and sign pose dynamics, thereby eliminating reliance on gloss and mitigating error propagation. The architecture integrates a text encoder with a pose decoder to map spoken-language text directly to temporally coherent 3D sign pose sequences. Evaluated on PHOENIX14T and How2Sign, Text2SignDiff achieves state-of-the-art performance, significantly improving generation accuracy, temporal smoothness, and contextual consistency. This work advances robust, scalable digital communication support for Deaf and hard-of-hearing communities.
Abstract
Sign language production (SLP) aims to translate spoken language sentences into a sequence of sign language pose frames, bridging the communication gap and promoting digital inclusion for deaf and hard-of-hearing communities. Existing methods typically rely on gloss, a symbolic representation of sign language words or phrases that serves as an intermediate step in SLP. This limits the flexibility and generalization of SLP, as gloss annotations are language-specific and often unavailable. Therefore, we present a novel diffusion-based generative approach, Text2Sign Diffusion (Text2SignDiff), for gloss-free SLP. Specifically, a gloss-free latent diffusion model is proposed to generate sign language sequences jointly from noisy latent sign codes and spoken text, reducing potential error accumulation through a non-autoregressive iterative denoising process. We also design a cross-modal signing aligner that learns a shared latent space to bridge visual and textual content in sign and spoken languages. This alignment supports the conditioned diffusion process, enabling more accurate and contextually relevant sign language generation without gloss. Extensive experiments on the commonly used PHOENIX14T and How2Sign datasets demonstrate the effectiveness of our method, achieving state-of-the-art performance.
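The key generative idea, iteratively denoising a whole latent sign-code sequence in parallel (non-autoregressively) under a text condition, can be sketched as a toy loop. Everything below is an illustrative assumption, not the paper's actual model: the "text encoder" is a fixed random embedding, the "denoiser" is a hand-written update that nudges every frame toward a text-conditioned target, and the step schedule is invented for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
SEQ_LEN, LATENT_DIM, STEPS = 16, 8, 50

# Stand-in "text encoder" output: one embedding for the spoken sentence
# (a real system would use a learned encoder over the input text).
text_emb = rng.normal(size=(LATENT_DIM,))

# Stand-in denoising target: a text-conditioned latent repeated per frame.
# In the actual method a learned network predicts the clean latent instead.
target = np.tile(np.tanh(text_emb), (SEQ_LEN, 1))

def denoise_step(z, t, total):
    """One parallel denoising update over all frames at once."""
    # Larger corrections early in the schedule, smaller near the end.
    step_size = (total - t) / total * 0.2
    return z + step_size * (target - z)

# Start from pure noise and refine the entire sequence jointly:
# no frame-by-frame autoregression, so errors cannot accumulate
# along the time axis the way they do in sequential generation.
z = rng.normal(size=(SEQ_LEN, LATENT_DIM))
for t in range(STEPS):
    z = denoise_step(z, t, STEPS)

# After denoising, a pose decoder would map z to 3D pose frames.
residual = np.abs(z - target).mean()
```

The non-autoregressive structure is visible in the loop body: each step updates all `SEQ_LEN` frames simultaneously, so the iteration count is the number of denoising steps, independent of sequence length.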