🤖 AI Summary
Real-time rendering of dynamic, view-dependent scenes remains a fundamental challenge in computer graphics. To address this, we propose 7D Gaussian Splatting (7DGS), which jointly models 3D spatial coordinates, 1D time, and 3D viewing direction, enabling real-time, view-dependent rendering of dynamic scenes. Our approach introduces a conditional slicing mechanism that jointly optimizes the spatiotemporal and viewpoint dimensions while remaining compatible with static 3D Gaussian Splatting; it combines a 7D Gaussian parameterization, joint gradient-based optimization across all dimensions, and efficient real-time rasterization. Experiments demonstrate state-of-the-art performance: PSNR improves by up to 7.36 dB over existing methods, and rendering reaches 401 FPS on complex dynamic, view-dependent scenes. To our knowledge, 7DGS is the first method to simultaneously support 4D dynamic scene modeling and 6D view-dependent effects under real-time constraints.
📝 Abstract
Real-time rendering of dynamic scenes with view-dependent effects remains a fundamental challenge in computer graphics. While recent advances in Gaussian Splatting have shown promising results in handling dynamic scenes (4DGS) and view-dependent effects (6DGS) separately, no existing method unifies these capabilities while maintaining real-time performance. We present 7D Gaussian Splatting (7DGS), a unified framework representing scene elements as seven-dimensional Gaussians spanning position (3D), time (1D), and viewing direction (3D). Our key contribution is an efficient conditional slicing mechanism that transforms 7D Gaussians into view- and time-conditioned 3D Gaussians, maintaining compatibility with existing 3D Gaussian Splatting pipelines while enabling joint optimization. Experiments demonstrate that 7DGS outperforms prior methods by up to 7.36 dB in PSNR while achieving real-time rendering (401 FPS) on challenging dynamic scenes with complex view-dependent effects. The project page is: https://gaozhongpai.github.io/7dgs/.
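To make the conditional slicing idea concrete: conditioning a joint Gaussian over (position, time, direction) on a fixed time and view direction yields a 3D Gaussian via the standard conditional-Gaussian formulas. The sketch below illustrates this with NumPy under that interpretation; it is not the paper's implementation, and all names (including the density-based weight `w`, assumed here as a plausible opacity modulation) are hypothetical.

```python
import numpy as np

def slice_7d_gaussian(mu, Sigma, t, d):
    """Condition a 7D Gaussian (3D position + 1D time + 3D view direction)
    on time t and direction d, yielding a 3D spatial Gaussian.
    Illustrative sketch using standard conditional-Gaussian identities."""
    mu_p, mu_c = mu[:3], mu[3:]            # spatial mean / conditioning mean
    S_pp = Sigma[:3, :3]                   # spatial covariance block
    S_pc = Sigma[:3, 3:]                   # position-(time,direction) cross block
    S_cc = Sigma[3:, 3:]                   # time+direction covariance block
    c = np.concatenate(([t], d))           # conditioning vector (1 + 3 = 4D)
    S_cc_inv = np.linalg.inv(S_cc)
    gain = S_pc @ S_cc_inv
    mu_cond = mu_p + gain @ (c - mu_c)     # conditioned 3D mean
    S_cond = S_pp - gain @ S_pc.T          # conditioned 3D covariance (Schur complement)
    # Hypothetical opacity weight: unnormalized density of the
    # (time, direction) marginal evaluated at the query (t, d).
    diff = c - mu_c
    w = np.exp(-0.5 * diff @ S_cc_inv @ diff)
    return mu_cond, S_cond, w
```

Because the output is an ordinary 3D mean and covariance (plus a scalar weight), such a slice can be handed directly to a standard 3D Gaussian Splatting rasterizer, which is consistent with the compatibility claim in the abstract.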