AI Summary
Existing SMPL-driven 3D Gaussian Splatting (3DGS) avatars struggle to achieve motion flexibility and high-fidelity appearance reconstruction simultaneously. To address this, we propose a hierarchical spatiotemporal sequence-conditioned framework, the first to unify spatial pose modeling and fine-grained vertex-level motion modeling within non-rigid deformation-guided 3D Gaussian optimization. Our approach integrates differentiable SMPL-X deformation, hierarchical temporal sampling, and sequential pose-conditioned encoding. It significantly improves dynamic human rendering quality (higher PSNR/SSIM) and cross-pose animation generalization, outperforming concurrent state-of-the-art methods on multiple dynamic human datasets. The core contribution is a deformation-aware, spatiotemporally consistent Gaussian optimization paradigm that overcomes the long-standing pose-appearance mapping bottleneck in neural human rendering.
Abstract
3D human avatars built on canonical radiance fields and per-frame observation-space warping enable high-fidelity rendering and animation. However, existing methods rely on either spatial SMPL(-X) poses or temporal embeddings, and respectively suffer from coarse rendering quality or limited animation flexibility. To address these challenges, we propose GAST, a framework that unifies 3D human modeling with 3DGS by hierarchically integrating both spatial and temporal information. Specifically, we design a sequential conditioning framework for the non-rigid warping of the human body, under whose guidance more accurate 3D Gaussians can be obtained in the observation space. Moreover, the explicit properties of Gaussians allow us to embed richer sequential information, encompassing both the coarse sequence of human poses and finer per-vertex motion details. These sequence conditions are further sampled across different temporal scales in a coarse-to-fine manner, ensuring unbiased inputs for non-rigid warping. Experimental results demonstrate that our method, with hierarchical spatio-temporal modeling, surpasses concurrent baselines, delivering both high-quality rendering and flexible animation.
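The coarse-to-fine sampling of sequence conditions across temporal scales could be sketched as below. This is a minimal illustration only: the function name, the stride values, and the number of frames taken per scale are assumptions for the sketch, not the paper's actual design.

```python
import numpy as np

def hierarchical_temporal_sample(poses, t, strides=(1, 4, 16), per_scale=2):
    """Gather conditioning frames around frame t at coarse-to-fine strides.

    poses: (T, D) array of per-frame pose/motion vectors (e.g. SMPL-X params).
    Returns the sampled pose vectors and the frame indices used.
    Illustrative sketch; strides and per_scale are assumed hyperparameters.
    """
    idx = []
    for s in strides:                       # one temporal scale per stride
        for i in range(1, per_scale + 1):   # a few past frames per scale
            idx.append(max(0, t - i * s))   # clamp at the sequence start
    idx = sorted(set(idx))                  # deduplicate, keep temporal order
    return poses[idx], idx
```

Fine strides capture per-vertex motion detail near the current frame, while coarse strides summarize the longer pose history, giving the non-rigid warping module conditions from several temporal scales at once.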