Advanced Sign Language Video Generation with Compressed and Quantized Multi-Condition Tokenization

📅 2025-06-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing sign language video generation (SLVG) methods rely on a single coarse-grained condition (e.g., skeletal sequences), which limits the naturalness and expressiveness of the generated videos. To address this, the authors propose the first multi-condition collaborative modeling framework for spoken-text-to-sign-language video generation. The method introduces a fine-grained joint representation of poses and 3D hands and establishes a novel multi-condition discrete tokenization paradigm. A finite scalar quantization (FSQ) autoencoder compresses and quantizes continuous motion representations into discrete tokens, and an end-to-end translator maps spoken language text to these fine-grained action tokens. The framework integrates diffusion models, multi-condition encoders, and high-fidelity 3D hand modeling. Extensive experiments demonstrate state-of-the-art performance in video quality, temporal coherence, and semantic fidelity. Code is publicly available.

📝 Abstract
Sign Language Video Generation (SLVG) seeks to generate identity-preserving sign language videos from spoken language texts. Existing methods primarily rely on a single coarse condition (e.g., skeleton sequences) as the intermediary to bridge the translation model and the video generation model, which limits both the naturalness and expressiveness of the generated videos. To overcome these limitations, we propose SignViP, a novel SLVG framework that incorporates multiple fine-grained conditions for improved generation fidelity. Rather than directly translating error-prone high-dimensional conditions, SignViP adopts a discrete tokenization paradigm to integrate and represent fine-grained conditions (i.e., fine-grained poses and 3D hands). SignViP contains three core components. (1) Sign Video Diffusion Model is jointly trained with a multi-condition encoder to learn continuous embeddings that encapsulate fine-grained motion and appearance. (2) Finite Scalar Quantization (FSQ) Autoencoder is further trained to compress and quantize these embeddings into discrete tokens for compact representation of the conditions. (3) Multi-Condition Token Translator is trained to translate spoken language text to discrete multi-condition tokens. During inference, Multi-Condition Token Translator first translates the spoken language text into discrete multi-condition tokens. These tokens are then decoded to continuous embeddings by FSQ Autoencoder, which are subsequently injected into Sign Video Diffusion Model to guide video generation. Experimental results show that SignViP achieves state-of-the-art performance across metrics, including video quality, temporal coherence, and semantic fidelity. The code is available at https://github.com/umnooob/signvip/.
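The abstract's core quantization step, finite scalar quantization (FSQ), replaces a learned VQ codebook with a fixed per-dimension rounding scheme: each latent dimension is bounded and snapped to one of a small number of levels, and the per-dimension codes jointly define a discrete token. The sketch below illustrates the general FSQ idea in NumPy; it is not the paper's implementation, and for simplicity it assumes an odd number of levels per dimension.

```python
import numpy as np

def fsq_quantize(z, levels):
    """Minimal FSQ sketch: bound each latent dimension with tanh, round it
    to one of `levels[d]` uniformly spaced values, and pack the per-dimension
    codes into a single integer token via a mixed-radix encoding.

    Assumes odd `levels` so the rounding grid is symmetric around zero.
    """
    levels = np.asarray(levels)
    half = (levels - 1) / 2.0          # e.g. 5 levels -> values in {-2,...,2}
    bounded = np.tanh(z) * half        # squash each dim into (-half, half)
    quantized = np.round(bounded)      # snap to the nearest integer level
    codes = (quantized + half).astype(int)  # shift to {0, ..., L-1} per dim
    # mixed-radix flattening: one integer token per latent vector
    bases = np.concatenate(([1], np.cumprod(levels[:-1])))
    token = int(np.dot(codes, bases))
    return quantized / half, token     # values normalized to [-1, 1]

# example: a 3-dim latent with level sizes (3, 5, 5) -> 75 possible tokens
values, token = fsq_quantize(np.array([0.0, 2.0, -2.0]), [3, 5, 5])
```

Because the grid is fixed, there is no codebook to learn and no codebook-collapse issue; in training one would pass gradients through the rounding with a straight-through estimator.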
Problem

Research questions and friction points this paper is trying to address.

Generates sign language videos from spoken texts
Overcomes limitations of single coarse conditions
Improves video naturalness and expressiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-condition encoder for fine-grained motion
FSQ Autoencoder compresses embeddings into tokens
Token translator converts text to condition tokens
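Taken together, the three components above form the inference pipeline the abstract describes: text is translated to discrete multi-condition tokens, the tokens are decoded back to continuous embeddings, and those embeddings condition the video diffusion model. The sketch below only illustrates this data flow; every class and method name is a hypothetical placeholder, not the released SignViP API.

```python
def generate_sign_video(text, translator, fsq_autoencoder, diffusion_model):
    """Illustrative SignViP-style inference flow (all interfaces hypothetical).

    1) spoken-language text -> discrete multi-condition tokens
    2) tokens -> continuous fine-grained condition embeddings
    3) embeddings injected into the diffusion model to guide video synthesis
    """
    tokens = translator.translate(text)            # step 1
    embeddings = fsq_autoencoder.decode(tokens)    # step 2
    return diffusion_model.sample(condition=embeddings)  # step 3
```

The design point this flow captures is that the translator only has to predict compact discrete tokens rather than error-prone high-dimensional pose sequences; the FSQ decoder restores the fine-grained continuous conditions afterwards.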