🤖 AI Summary
To address the challenge of optimizing deformation fields in dynamic 3D scene reconstruction under complex motion, this paper proposes a spatiotemporally unconstrained 4D Gaussian representation: it discards rigid canonical spaces and fixed-topology constraints, allowing each Gaussian primitive to be placed at arbitrary spatiotemporal locations. The core innovation lies in learning an explicit per-primitive motion function to model temporal continuity, integrated with differentiable Gaussian splatting rendering, joint spatiotemporal optimization, and an adaptive Gaussian insertion/deletion mechanism. This design significantly reduces temporal redundancy and enhances expressiveness for non-rigid, multi-scale motions. Quantitatively, our method achieves PSNR gains of 2.1–3.8 dB over prior art on multiple dynamic scene benchmarks; qualitatively, it delivers state-of-the-art rendering quality and enables real-time dynamic novel-view synthesis.
📝 Abstract
This paper addresses the challenge of reconstructing dynamic 3D scenes with complex motions. Some recent works define 3D Gaussian primitives in a canonical space and use deformation fields to map canonical primitives to observation spaces, achieving real-time dynamic view synthesis. However, these methods often struggle with scenes containing complex motions due to the difficulty of optimizing deformation fields. To overcome this problem, we propose FreeTimeGS, a novel 4D representation that allows Gaussian primitives to appear at arbitrary times and locations. In contrast to canonical Gaussian primitives, our representation possesses strong flexibility, improving its ability to model dynamic 3D scenes. In addition, we endow each Gaussian primitive with a motion function, allowing it to move to neighboring regions over time, which reduces temporal redundancy. Experimental results on several datasets show that the rendering quality of our method outperforms recent methods by a large margin. Project page: https://zju3dv.github.io/freetimegs/ .
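To make the representation concrete, the sketch below shows one plausible way a 4D Gaussian primitive with its own canonical time and a motion function might look. The class name, a linear motion model, and the Gaussian temporal-opacity falloff are all illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

class FreeTimeGaussian:
    """Hypothetical 4D Gaussian primitive: a 3D center anchored at its own
    canonical time t0, a motion function (here assumed linear), and a
    temporal extent controlling when the primitive is visible."""

    def __init__(self, center, t0, velocity, sigma_t):
        self.center = np.asarray(center, dtype=float)      # 3D position at time t0
        self.t0 = float(t0)                                # canonical timestamp
        self.velocity = np.asarray(velocity, dtype=float)  # linear motion function
        self.sigma_t = float(sigma_t)                      # temporal extent

    def position(self, t):
        # Motion function: the primitive drifts to neighboring regions
        # over time instead of being warped from a shared canonical space.
        return self.center + self.velocity * (t - self.t0)

    def temporal_opacity(self, t):
        # Gaussian falloff in time: the primitive contributes most near
        # its own canonical time t0, reducing temporal redundancy.
        return float(np.exp(-0.5 * ((t - self.t0) / self.sigma_t) ** 2))

g = FreeTimeGaussian(center=[0.0, 0.0, 0.0], t0=0.5,
                     velocity=[1.0, 0.0, 0.0], sigma_t=0.1)
print(g.position(0.7))          # center shifted by velocity * (0.7 - 0.5)
print(g.temporal_opacity(0.5))  # maximal at the canonical time
```

At render time, such a primitive would only be splatted (with its temporally modulated opacity) when the query time falls near its temporal extent, so different primitives can cover different parts of the sequence.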