Physics-Informed Deformable Gaussian Splatting: Towards Unified Constitutive Laws for Time-Evolving Material Field

📅 2025-11-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of modeling diverse physical motions in monocular video-driven dynamic 3D Gaussian Splatting (3DGS), this paper proposes PhysGauss: a framework that models each Gaussian as a Lagrangian material point with time-varying constitutive parameters, and introduces, for the first time, a time-evolving material field to characterize its physical response. Methodologically, PhysGauss integrates static-dynamic decoupled 4D hash encoding, differentiable deformed Gaussian splatting, Lagrangian particle flow modeling, and enforces physical consistency via the Cauchy momentum equation residual, jointly supervised by 2D optical flow and camera-motion compensation in an end-to-end manner. Evaluated on both custom and public dynamic datasets, PhysGauss significantly improves physical plausibility and geometric accuracy in novel-view synthesis, accelerates convergence by 37%, and demonstrates superior generalization—establishing a new paradigm for deep integration of data-driven rendering and physics-based modeling.

Technology Category

Application Category

📝 Abstract
Recently, 3D Gaussian Splatting (3DGS), an explicit scene representation technique, has shown significant promise for dynamic novel-view synthesis from monocular video input. However, purely data-driven 3DGS often struggles to capture the diverse physics-driven motion patterns in dynamic scenes. To fill this gap, we propose Physics-Informed Deformable Gaussian Splatting (PIDG), which treats each Gaussian particle as a Lagrangian material point with time-varying constitutive parameters and is supervised by 2D optical flow via motion projection. Specifically, we adopt static-dynamic decoupled 4D decomposed hash encoding to reconstruct geometry and motion efficiently. Subsequently, we impose the Cauchy momentum residual as a physics constraint, enabling independent prediction of each particle's velocity and constitutive stress via a time-evolving material field. Finally, we further supervise data fitting by matching Lagrangian particle flow to camera-compensated optical flow, which accelerates convergence and improves generalization. Experiments on a custom physics-driven dataset as well as on standard synthetic and real-world datasets demonstrate significant gains in physical consistency and monocular dynamic reconstruction quality.
Problem

Research questions and friction points this paper is trying to address.

Dynamic novel-view synthesis struggles with physics-driven motion patterns
Data-driven 3DGS fails to capture time-evolving material field behavior
Monocular reconstruction lacks physical consistency in dynamic scene modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-Informed Deformable Gaussian Splatting for material fields
Static-dynamic decoupled 4D hash encoding for geometry
Cauchy momentum residual constraint for particle physics
🔎 Similar Papers
No similar papers found.
H
Haoqin Hong
USTC, China
D
Ding Fan
USTC, China
F
Fubin Dou
USTC, China
Z
Zhi-Li Zhou
UIUC, USA
H
Haoran Sun
USTC, China
Congcong Zhu
Congcong Zhu
USTC
Multimedia Understanding
J
Jingrun Chen
Suzhou Institute for Advanced Research, USTC, China; Suzhou Big Data & AI Research and Engineering Center, China