🤖 AI Summary
Existing text-driven 4D avatar generation methods suffer from temporal and geometric inconsistency, perceptual artifacts, motion distortion, high computational cost, and weak control over dynamics. To address these challenges, this paper introduces TriDiff-4D, a diffusion-based triplane re-posing framework. The method combines skeleton-driven motion control with a two-stage diffusion architecture that explicitly models structural and motion priors, and couples triplane implicit representations with autoregressive temporal modeling to generate high-fidelity 4D sequences of arbitrary length. Trained on large-scale 3D geometry and motion datasets, the approach cuts generation latency from hours to seconds. Quantitatively and qualitatively, it achieves state-of-the-art geometric accuracy, visual realism, and temporal coherence. Moreover, it improves controllability through precise skeletal conditioning and raises inference efficiency, setting a new bar for scalable, high-quality text-to-4D synthesis.
📝 Abstract
With the increasing demand for 3D animation, generating high-fidelity, controllable 4D avatars from textual descriptions remains a significant challenge. Despite notable efforts in 4D generative modeling, existing methods exhibit fundamental limitations that impede their broader applicability, including temporal and geometric inconsistencies, perceptual artifacts, motion irregularities, high computational costs, and limited control over dynamics. To address these challenges, we propose TriDiff-4D, a novel 4D generative pipeline that employs diffusion-based triplane re-posing to produce high-quality, temporally coherent 4D avatars. Our model adopts an autoregressive strategy to generate 4D sequences of arbitrary length, synthesizing each 3D frame with a single diffusion process. By explicitly learning 3D structure and motion priors from large-scale 3D and motion datasets, TriDiff-4D enables skeleton-driven 4D generation that excels in temporal consistency, motion accuracy, computational efficiency, and visual fidelity. Specifically, TriDiff-4D first generates a canonical 3D avatar and a corresponding motion sequence from a text prompt, then uses a second diffusion model to animate the avatar according to the motion sequence, supporting arbitrarily long 4D generation. Experimental results demonstrate that TriDiff-4D significantly outperforms existing methods, reducing generation time from hours to seconds by eliminating the optimization process, while substantially improving the generation of complex motions with high-fidelity appearance and accurate 3D geometry.
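To make the control flow concrete, below is a minimal Python sketch of the pipeline as the abstract describes it: one stage produces a canonical triplane avatar and a skeleton motion sequence from text, and a second diffusion model re-poses the triplane one frame per diffusion pass, chained autoregressively so sequences of arbitrary length are possible. All class names, method signatures, and tensor shapes here are hypothetical placeholders standing in for the paper's actual samplers, which the abstract does not specify.

```python
# Hypothetical sketch of the two-stage TriDiff-4D control flow; none of
# these classes or shapes come from the paper -- they only illustrate the
# structure: one diffusion pass per 3D frame, chained autoregressively.
import numpy as np

class CanonicalAvatarDiffusion:
    """Stage 1a (hypothetical): text -> canonical triplane avatar."""
    def sample(self, prompt: str) -> np.ndarray:
        # A triplane is three axis-aligned feature planes, e.g. (3, C, H, W).
        rng = np.random.default_rng(hash(prompt) % 2**32)
        return rng.standard_normal((3, 32, 64, 64)).astype(np.float32)

class MotionDiffusion:
    """Stage 1b (hypothetical): text -> skeleton pose sequence."""
    def sample(self, prompt: str, num_frames: int) -> np.ndarray:
        rng = np.random.default_rng(hash(prompt[::-1]) % 2**32)
        # e.g. 24 joints x 3 rotation parameters per frame.
        return rng.standard_normal((num_frames, 24, 3)).astype(np.float32)

class TriplaneReposingDiffusion:
    """Stage 2 (hypothetical): re-pose the canonical triplane with a single
    diffusion pass, conditioned on the target skeleton pose and, for
    temporal coherence, on the previously generated frame."""
    def sample(self, canonical, pose, prev_triplane):
        # Placeholder for one conditional denoising pass.
        return canonical + 0.01 * pose.mean() + 0.0 * prev_triplane

def generate_4d(prompt: str, num_frames: int):
    canonical = CanonicalAvatarDiffusion().sample(prompt)        # 3D avatar
    motion = MotionDiffusion().sample(prompt, num_frames)        # skeleton track
    repose = TriplaneReposingDiffusion()

    frames, prev = [], canonical
    for pose in motion:                         # autoregressive loop:
        prev = repose.sample(canonical, pose, prev)  # one pass per frame
        frames.append(prev)
    return frames                               # length is unbounded

frames = generate_4d("a knight waving a sword", num_frames=8)
print(len(frames), frames[0].shape)  # 8 (3, 32, 64, 64)
```

The key property this sketch tries to capture is why generation takes seconds rather than hours: each frame costs one feed-forward diffusion sampling pass instead of a per-asset optimization loop, and conditioning each frame on its predecessor is one plausible way to obtain the temporal coherence the paper reports.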