🤖 AI Summary
Existing 4D Gaussian splatting methods struggle to model motion uncertainty in dynamic scenes and cannot reliably estimate motion under sparse observations or at unobserved time instances. This work introduces, for the first time, variational Gaussian processes into the 4D Gaussian splatting framework, leveraging spatio-temporal kernel functions and inducing-point approximations to probabilistically model dynamic geometry and appearance. The proposed approach not only quantifies motion uncertainty and identifies regions of high ambiguity but also completes missing motion trajectories and enables temporal extrapolation. In doing so, it achieves high-fidelity reconstruction while advancing the integration of probabilistic modeling with neural scene representations.
📝 Abstract
We present GP-4DGS, a novel framework that integrates Gaussian Processes (GPs) into 4D Gaussian Splatting (4DGS) for principled probabilistic modeling of dynamic scenes. While existing 4DGS methods focus on deterministic reconstruction, they are inherently limited in capturing motion ambiguity and lack mechanisms to assess prediction reliability. By leveraging the kernel-based probabilistic nature of GPs, our approach introduces three key capabilities: (i) uncertainty quantification for motion predictions, (ii) motion estimation for unobserved or sparsely sampled regions, and (iii) temporal extrapolation beyond observed training frames. To scale GPs to the large number of Gaussian primitives in 4DGS, we design spatio-temporal kernels that capture the correlation structure of deformation fields and adopt variational Gaussian Processes with inducing points for tractable inference. Our experiments show that GP-4DGS enhances reconstruction quality while providing reliable uncertainty estimates that effectively identify regions of high motion ambiguity. By addressing these challenges, our work takes a meaningful step toward bridging probabilistic modeling and neural graphics.
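The machinery the abstract names, a separable spatio-temporal kernel combined with sparse (inducing-point) GP inference, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, lengthscales, and toy "deformation" signal below are illustrative assumptions, and the predictive equations follow the standard Titsias-style sparse-GP form rather than anything specific to GP-4DGS.

```python
import numpy as np

def rbf(X1, X2, lengthscale):
    """Squared-exponential kernel between the rows of X1 and X2."""
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * np.maximum(d2, 0.0) / lengthscale**2)

def st_kernel(A, B, ls_space=0.5, ls_time=0.3):
    """Separable spatio-temporal kernel on rows (x, y, z, t):
    k((x, t), (x', t')) = k_space(x, x') * k_time(t, t')."""
    return rbf(A[:, :3], B[:, :3], ls_space) * rbf(A[:, 3:4], B[:, 3:4], ls_time)

def sparse_gp_predict(X, y, Z, Xs, noise=1e-2):
    """Sparse GP predictive mean/variance with inducing points Z
    (Titsias-style collapsed posterior; `noise` is the noise variance)."""
    Kzz = st_kernel(Z, Z) + 1e-8 * np.eye(len(Z))   # jitter for stability
    Kzx = st_kernel(Z, X)
    Kzs = st_kernel(Z, Xs)
    Sigma = Kzz + (Kzx @ Kzx.T) / noise             # Kzz + sigma^-2 Kzx Kxz
    mean = Kzs.T @ np.linalg.solve(Sigma, Kzx @ y) / noise
    kss = np.ones(len(Xs))                          # product RBF has unit diagonal
    var = (kss
           - np.einsum('ij,ij->j', Kzs, np.linalg.solve(Kzz, Kzs))
           + np.einsum('ij,ij->j', Kzs, np.linalg.solve(Sigma, Kzs)))
    return mean, var

# Toy stand-in for a deformation field: a scalar motion signal
# that varies smoothly in space and time.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))            # (x, y, z, t) observations
y = np.sin(2 * np.pi * X[:, 3]) * X[:, 0]       # smooth ground-truth signal
Z = X[rng.choice(len(X), 30, replace=False)]    # inducing points (a subset)
mean, var = sparse_gp_predict(X, y, Z, X)
```

The returned `var` is what gives the abstract's capability (i): it stays near the prior variance far from observed (x, t) samples, flagging regions of high motion ambiguity, while querying `Xs` at times outside the training range exercises capabilities (ii) and (iii). In a real 4DGS setting the inducing points would be optimized jointly with an evidence lower bound rather than chosen as a data subset.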