TRiGS: Temporal Rigid-Body Motion for Scalable 4D Gaussian Splatting

📅 2026-04-01
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing 4D Gaussian splatting methods suffer from temporal discontinuities and unbounded growth in Gaussian counts due to piecewise-linear velocity approximations and short temporal windows, hindering scalability to long videos. This work proposes TRiGS, the first approach to employ a continuous rigid-body motion representation for dynamic scene modeling. By leveraging SE(3) transformations, hierarchical Bézier residuals, and learnable local anchor points, TRiGS endows each Gaussian primitive with geometrically consistent temporal evolution. The method preserves long-term temporal coherence, effectively curbs the proliferation of Gaussians, and achieves high-fidelity rendering on standard benchmarks. Notably, TRiGS scales successfully to video sequences of 600–1200 frames, significantly outperforming existing techniques in both temporal stability and memory efficiency.
📝 Abstract
Recent 4D Gaussian Splatting (4DGS) methods achieve impressive dynamic scene reconstruction but often rely on piecewise-linear velocity approximations and short temporal windows. This disjointed modeling leads to severe temporal fragmentation, forcing primitives to be repeatedly eliminated and regenerated to track complex nonlinear dynamics. This makeshift approximation destroys the long-term temporal identity of objects and causes an inevitable proliferation of Gaussians, hindering scalability to extended video sequences. To address this, we propose TRiGS, a novel 4D representation that utilizes unified, continuous geometric transformations. By integrating $SE(3)$ transformations, hierarchical Bézier residuals, and learnable local anchors, TRiGS models geometrically consistent rigid motions for individual primitives. This continuous formulation preserves temporal identity and effectively mitigates unbounded memory growth. Extensive experiments demonstrate that TRiGS achieves high-fidelity rendering on standard benchmarks while uniquely scaling to extended video sequences (e.g., 600–1200 frames) without severe memory bottlenecks, significantly outperforming prior works in temporal stability.
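The paper itself is not reproduced here, so the following is only a rough NumPy sketch of the kind of motion model the abstract describes: each Gaussian center follows a continuous rigid $SE(3)$ motion (via the exponential map of a constant twist) about a learnable local anchor, plus a Bézier residual for non-rigid deviation. The function names, the constant-twist parameterization, and the cubic Bézier degree are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def se3_exp(omega, v, t):
    """Exponential map of a constant twist (omega, v) scaled by time t.

    Assumed parameterization: omega is axis-angle rotation rate (3,),
    v is translational velocity (3,). Returns a 4x4 homogeneous transform.
    """
    w = omega * t
    theta = np.linalg.norm(w)
    K = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])  # skew-symmetric matrix of w
    if theta < 1e-8:
        R, V = np.eye(3) + K, np.eye(3)  # first-order approximation near 0
    else:
        R = (np.eye(3) + np.sin(theta) / theta * K
             + (1 - np.cos(theta)) / theta**2 * K @ K)  # Rodrigues' formula
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * K
             + (theta - np.sin(theta)) / theta**3 * K @ K)  # left Jacobian
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ (v * t)
    return T

def bezier_residual(ctrl, t):
    """Cubic Bezier residual r(t) for t in [0, 1]; ctrl has shape (4, 3)."""
    b = np.array([(1 - t)**3, 3 * t * (1 - t)**2, 3 * t**2 * (1 - t), t**3])
    return b @ ctrl

def gaussian_center(x0, anchor, omega, v, ctrl, t):
    """Center of one Gaussian at time t: rigid motion about a local anchor
    plus a Bezier residual (illustrative composition, not the paper's)."""
    T = se3_exp(omega, v, t)
    return T[:3, :3] @ (x0 - anchor) + anchor + T[:3, 3] + bezier_residual(ctrl, t)
```

Because the trajectory is a smooth function of $t$, the same primitive can be evaluated at any frame without re-spawning Gaussians per temporal window, which is the mechanism the abstract credits for preserving temporal identity and bounding memory growth.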
Problem

Research questions and friction points this paper is trying to address.

4D Gaussian Splatting
temporal fragmentation
nonlinear dynamics
memory scalability
temporal identity
Innovation

Methods, ideas, or system contributions that make the work stand out.

4D Gaussian Splatting
Temporal Coherence
SE(3) Transformations
Bezier Residuals
Scalable Dynamic Reconstruction
🔎 Similar Papers
Suwoong Yeom
Sogang University
Joonsik Nam
Sogang University
Seunggyu Choi
Sogang University
Lucas Yunkyu Lee
POSTECH
Sangmin Kim
Seoul National University
Jaesik Park
Seoul National University, CSE & IPAI
Computer Vision, Computer Graphics, Machine Learning
Joonsoo Kim
Electronics and Telecommunications Research Institute
Kugjin Yun
Electronics and Telecommunications Research Institute
Kyeongbo Kong
Pusan National University
Sukju Kang
Sogang University