🤖 AI Summary
Addressing the challenge of balancing temporal coherence and sampling efficiency in real-time speech-driven gesture generation, this paper proposes the Rolling Diffusion Ladder Acceleration (RDLA) framework. RDLA introduces a structured ladder-style noise schedule with multi-frame parallel denoising inside a rolling-generation paradigm, enabling efficient sampling under motion-consistency constraints. Crucially, it is model-agnostic: it can wrap any diffusion-based gesture generator without retraining or architectural modification. Evaluated on the ZEGGS and BEAT benchmarks, RDLA achieves up to a 2× sampling speedup while preserving fidelity and gesture diversity, and it consistently outperforms existing streaming generation methods, delivering real-time, high-quality, temporally stable speech-synchronized gesture synthesis.
📝 Abstract
Generating co-speech gestures in real time requires both temporal coherence and efficient sampling. We introduce Accelerated Rolling Diffusion, a novel framework for streaming gesture generation that extends rolling diffusion models with structured progressive noise scheduling, enabling seamless long-sequence motion synthesis while preserving realism and diversity. We further propose Rolling Diffusion Ladder Acceleration (RDLA), a new approach that restructures the noise schedule into a stepwise ladder, allowing multiple frames to be denoised simultaneously. This significantly improves sampling efficiency while maintaining motion consistency, achieving up to a 2× speedup with high visual fidelity and temporal coherence. We evaluate our approach on ZEGGS and BEAT, strong benchmarks for real-world applicability. Our framework is universally applicable to any diffusion-based gesture generation model, transforming it into a streaming approach. Applied to three state-of-the-art methods, it consistently outperforms them, demonstrating its effectiveness as a generalizable and efficient solution for real-time, high-fidelity co-speech gesture synthesis.
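To make the "stepwise ladder" idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of how such a schedule could be laid out. In plain rolling diffusion, each frame in the sliding window carries its own noise level, increasing from the front (nearly clean) to the back (nearly pure noise). The ladder variant assumed here groups consecutive frames into rungs that share one noise level, so a single denoising step can update all frames of a rung in parallel; the function name and parameters (`ladder_noise_schedule`, `window_size`, `rung_size`) are illustrative, not from the paper.

```python
import numpy as np

def ladder_noise_schedule(window_size: int, rung_size: int) -> np.ndarray:
    """Illustrative ladder-style noise schedule for a rolling diffusion window.

    Groups the window's frames into rungs of `rung_size` consecutive frames
    that share a single noise level, instead of one distinct level per frame.
    Returns an array of per-frame noise levels in (0, 1], ascending from the
    front of the window (almost denoised) to the back (fully noised).
    """
    assert window_size % rung_size == 0, "window must split evenly into rungs"
    num_rungs = window_size // rung_size
    # One evenly spaced noise level per rung.
    levels = np.linspace(1.0 / num_rungs, 1.0, num_rungs)
    # Repeat each rung's level across its frames: frames sharing a level
    # can be denoised together in one step.
    return np.repeat(levels, rung_size)

# Per-frame schedule (rung_size=1) vs. ladder schedule (rung_size=2):
per_frame = ladder_noise_schedule(window_size=8, rung_size=1)
ladder = ladder_noise_schedule(window_size=8, rung_size=2)
```

With `rung_size=2`, each rolling step advances two frames per denoising pass instead of one, halving the number of passes per emitted frame, which is consistent with the roughly 2× speedup reported above.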