DIDiffGes: Decoupled Semi-Implicit Diffusion Models for Real-time Gesture Generation from Speech

📅 2025-03-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion models for speech-driven gesture generation suffer from excessive sampling steps, hindering real-time deployment. To address this, we propose the Decoupled Semi-Implicit Diffusion (DISD) framework—the first to explicitly decouple the marginal distributions of body and hand motions: a GAN implicitly models the marginal distributions, while an L2 loss explicitly learns the conditional distribution. Furthermore, we introduce root-noise-conditioned denoising, wherein local body representations guide stable and efficient denoising. DISD achieves high-fidelity, expressive, and photorealistic gestures with only 10 denoising steps—100× faster than state-of-the-art diffusion-based methods—significantly reducing computational overhead. A user study demonstrates that DISD outperforms existing SOTA approaches in anthropomorphism, motion appropriateness, and stylistic consistency.

Technology Category

Application Category

📝 Abstract
Diffusion models have demonstrated remarkable synthesis quality and diversity in generating co-speech gestures. However, the computationally intensive sampling steps associated with diffusion models hinder their practicality in real-world applications. Hence, we present DIDiffGes, for a Decoupled Semi-Implicit Diffusion model-based framework, that can synthesize high-quality, expressive gestures from speech using only a few sampling steps. Our approach leverages Generative Adversarial Networks (GANs) to enable large-step sampling for diffusion model. We decouple gesture data into body and hands distributions and further decompose them into marginal and conditional distributions. GANs model the marginal distribution implicitly, while L2 reconstruction loss learns the conditional distributions exciplictly. This strategy enhances GAN training stability and ensures expressiveness of generated full-body gestures. Our framework also learns to denoise root noise conditioned on local body representation, guaranteeing stability and realism. DIDiffGes can generate gestures from speech with just 10 sampling steps, without compromising quality and expressiveness, reducing the number of sampling steps by a factor of 100 compared to existing methods. Our user study reveals that our method outperforms state-of-the-art approaches in human likeness, appropriateness, and style correctness. Project is https://cyk990422.github.io/DIDiffGes.
Problem

Research questions and friction points this paper is trying to address.

Real-time gesture generation from speech with few sampling steps
Decoupling gesture data into body and hands distributions
Enhancing GAN training stability for expressive full-body gestures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled Semi-Implicit Diffusion for gesture generation
GANs enable large-step sampling in diffusion models
Denoises root noise for stability and realism
🔎 Similar Papers
No similar papers found.