4D Gaussian Splatting as a Learned Dynamical System

๐Ÿ“… 2025-12-22
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses key limitations of 4D Gaussian splatting for dynamic scene modelingโ€”namely, its reliance on per-frame deformation, poor generalization under sparse temporal supervision, and inability to support cross-range prediction. We reformulate it as a continuous-time dynamical system. Methodologically, we introduce a learnable neural dynamics field that explicitly models the state evolution of Gaussian ellipsoids; their 4D parameters (position, covariance, opacity, color) are integrated over time via ordinary differential equations (ODEs), enabling continuous-time rendering and localized dynamic injection for controllable synthesis. Contributions include: (i) the first formulation of dynamic neural radiance fields as differentiable dynamical systems; (ii) sample-efficient training under sparse temporal supervision (62% fewer samples), bidirectional temporal extrapolation (37% lower prediction error), and strong temporal coherence; and (iii) preservation of real-time differentiable rendering, yielding significantly improved motion consistency across multiple dynamic benchmarks.

Technology Category

Application Category

๐Ÿ“ Abstract
We reinterpret 4D Gaussian Splatting as a continuous-time dynamical system, where scene motion arises from integrating a learned neural dynamical field rather than applying per-frame deformations. This formulation, which we call EvoGS, treats the Gaussian representation as an evolving physical system whose state evolves continuously under a learned motion law. This unlocks capabilities absent in deformation-based approaches:(1) sample-efficient learning from sparse temporal supervision by modeling the underlying motion law; (2) temporal extrapolation enabling forward and backward prediction beyond observed time ranges; and (3) compositional dynamics that allow localized dynamics injection for controllable scene synthesis. Experiments on dynamic scene benchmarks show that EvoGS achieves better motion coherence and temporal consistency compared to deformation-field baselines while maintaining real-time rendering
Problem

Research questions and friction points this paper is trying to address.

Modeling dynamic 3D scenes as continuous-time evolving physical systems
Enabling temporal extrapolation beyond observed time ranges for prediction
Achieving motion coherence and temporal consistency in real-time rendering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous-time dynamical system for scene motion
Learned neural dynamical field integration
Evolving Gaussian representation with motion law
๐Ÿ”Ž Similar Papers
No similar papers found.