🤖 AI Summary
To address limitations of existing controllable 3D human avatars, particularly in modeling cloth sliding, driving flexibility, and model compactness, this paper proposes a drivable, hierarchically layered 3D Gaussian avatar. Methodologically, it introduces a tetrahedral cage-based deformation mechanism that replaces conventional linear blend skinning and enables geometry-aware articulation. A modular hierarchical framework models garments, hands, and face separately, integrating a multi-layer differentiable compositing pipeline, driving signals conditioned on keypoints or joint angles, and optimization that exploits each cage cell's localized influence. Rendering uses 3D Gaussians as primitives; because each tetrahedron influences only the Gaussians it encloses, the low-dimensional driving signals are effectively decoupled from the large set of primitives to be rendered. Experiments on multi-view datasets demonstrate state-of-the-art PSNR and SSIM, superior rendering fidelity, finer-grained control (e.g., realistic cloth sliding), and a significantly reduced parameter count.
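The multi-layer compositing mentioned above can be pictured as standard front-to-back "over" blending of per-layer renders (body, garment, face). The following is a minimal sketch, not the paper's implementation; the function name `composite_layers` and the use of premultiplied-alpha RGB are assumptions for illustration.

```python
import numpy as np

def composite_layers(layers):
    """Front-to-back 'over' compositing of per-layer renders.

    layers: list of (rgb, alpha) pairs ordered front to back,
            with rgb given as premultiplied-alpha color.
    Returns the composited (rgb, alpha) for one pixel.
    """
    rgb = np.zeros(3)
    alpha = 0.0
    for layer_rgb, layer_alpha in layers:
        # Each new layer only contributes through the remaining transmittance (1 - alpha).
        rgb = rgb + (1.0 - alpha) * np.asarray(layer_rgb, dtype=float)
        alpha = alpha + (1.0 - alpha) * layer_alpha
    return rgb, alpha

# A half-transparent white garment layer over an opaque black body layer
layers = [(np.array([0.5, 0.5, 0.5]), 0.5),   # premultiplied: 0.5 * white
          (np.array([0.0, 0.0, 0.0]), 1.0)]
rgb, alpha = composite_layers(layers)          # → mid-gray, fully opaque
```

Because every step is a differentiable affine blend, gradients flow through all layers, which is what lets each layer (garment, hands, face) be optimized jointly while remaining separately drivable.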
📝 Abstract
We present Drivable 3D Gaussian Avatars (D3GA), a multi-layered 3D controllable model for human bodies that utilizes 3D Gaussian primitives embedded into tetrahedral cages. The advantage of using cages over the commonly employed linear blend skinning (LBS) is that primitives like 3D Gaussians are naturally re-oriented and their kernels stretched via the deformation gradients of the encapsulating tetrahedron. Additional offsets are modeled for the tetrahedron vertices, effectively decoupling the low-dimensional driving poses from the extensive set of primitives to be rendered. This separation is achieved through the localized influence of each tetrahedron on its 3D Gaussians, resulting in improved optimization. Using the cage-based deformation model, we introduce a compositional pipeline that decomposes an avatar into layers, such as garments, hands, or faces, improving the modeling of phenomena like garment sliding. These parts can be conditioned on different driving signals, such as keypoints for facial expressions or joint-angle vectors for garments and the body. Our experiments on two multi-view datasets with varied body shapes, clothes, and motions show higher-quality results that surpass other SOTA methods on PSNR and SSIM using the same data, while offering greater flexibility and compactness.
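The cage mechanism the abstract describes, re-orienting and stretching a Gaussian via the deformation gradient of its enclosing tetrahedron, can be sketched in a few lines. This is a minimal illustration under standard assumptions (linear deformation gradient from rest and deformed edge matrices, covariance transformed as an affine map), not the authors' code; the function names are hypothetical.

```python
import numpy as np

def deformation_gradient(rest_verts, deformed_verts):
    """Deformation gradient F of a tetrahedron.

    rest_verts, deformed_verts: (4, 3) arrays of tetrahedron vertices.
    F maps rest-space edge vectors to deformed-space edge vectors.
    """
    Dm = (rest_verts[1:] - rest_verts[0]).T      # rest edge matrix (3x3)
    Ds = (deformed_verts[1:] - deformed_verts[0]).T  # deformed edge matrix (3x3)
    return Ds @ np.linalg.inv(Dm)

def deform_gaussian(mean, cov, rest_verts, deformed_verts):
    """Transport a 3D Gaussian (mean, covariance) with its enclosing tetrahedron.

    The mean moves with the affine map of the cage cell; the covariance
    (the Gaussian kernel) is re-oriented and stretched as F @ cov @ F.T.
    """
    F = deformation_gradient(rest_verts, deformed_verts)
    new_mean = deformed_verts[0] + F @ (mean - rest_verts[0])
    new_cov = F @ cov @ F.T
    return new_mean, new_cov
```

Note that no per-Gaussian skinning weights are needed: once the (low-dimensional) cage vertices are posed, every Gaussian inside a cell follows automatically, which is the decoupling of driving signal from primitive count that the abstract highlights.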