PGSR: Planar-based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction

📅 2024-06-10
🏛️ IEEE Transactions on Visualization and Computer Graphics
📈 Citations: 28
Influential: 7
🤖 AI Summary
To address the limitations of 3D Gaussian Splatting (3DGS) in surface reconstruction—namely, low geometric accuracy, multi-view inconsistency, and poor illumination robustness—this paper proposes a plane-aware Gaussian representation that unifies efficient rendering with high-fidelity mesh reconstruction. The method introduces three key innovations: (1) an unbiased depth rendering mechanism enabling differentiable depth and normal map generation; (2) joint single-view geometric priors and multi-view photometric-geometric regularization; and (3) a differentiable camera exposure compensation network to handle large-scale illumination variations. Experiments demonstrate significant improvements in mesh quality and rendering fidelity across both indoor and outdoor scenes. The approach achieves superior geometric accuracy compared to state-of-the-art 3DGS-based reconstruction methods, while also outperforming NeRF-based approaches in training and rendering speed.
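The exposure compensation idea in point (3) can be sketched in its simplest common form: a learnable per-image affine correction applied to the rendered image before the photometric loss. This is an illustrative assumption only; the paper's actual compensation network may be more elaborate, and the function names here are hypothetical.

```python
import numpy as np

def compensate_exposure(rendered, a, b):
    # Hypothetical per-image affine exposure model: I' = a * I + b.
    # a and b would be optimized jointly with the Gaussians, so that
    # illumination changes between views are absorbed here rather than
    # baked into the scene geometry/appearance.
    return a * rendered + b

def photometric_loss(rendered, target, a, b):
    # L1 photometric loss on the exposure-compensated rendering.
    return np.mean(np.abs(compensate_exposure(rendered, a, b) - target))
```

Because the correction is differentiable, gradients flow to `a` and `b` alongside the Gaussian parameters during training.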

📝 Abstract
Recently, 3D Gaussian Splatting (3DGS) has attracted widespread attention due to its high-quality rendering, and ultra-fast training and rendering speed. However, due to the unstructured and irregular nature of Gaussian point clouds, it is difficult to guarantee geometric reconstruction accuracy and multi-view consistency simply by relying on image reconstruction loss. Although many studies on surface reconstruction based on 3DGS have emerged recently, the quality of their meshes is generally unsatisfactory. To address this problem, we propose a fast planar-based Gaussian splatting reconstruction representation (PGSR) to achieve high-fidelity surface reconstruction while ensuring high-quality rendering. Specifically, we first introduce an unbiased depth rendering method, which directly renders the distance from the camera origin to the Gaussian plane and the corresponding normal map based on the Gaussian distribution of the point cloud, and divides the two to obtain the unbiased depth. We then introduce single-view geometric, multi-view photometric, and geometric regularization to preserve global geometric accuracy. We also propose a camera exposure compensation model to cope with scenes with large illumination variations. Experiments on indoor and outdoor scenes show that the proposed method achieves fast training and rendering while maintaining high-fidelity rendering and geometric reconstruction, outperforming 3DGS-based and NeRF-based methods. Our code will be made publicly available, and more information can be found on our project page (https://zju3dv.github.io/pgsr/).
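The unbiased depth step described in the abstract—rendering the camera-to-plane distance and the normal map, then dividing the two—can be sketched per pixel under a standard pinhole model. This is a minimal illustration, not the PGSR implementation; all names are assumptions.

```python
import numpy as np

def unbiased_depth(distance_map, normal_map, K):
    """Per-pixel unbiased depth: depth = distance / (normal . ray).

    distance_map: (H, W) rendered distance from the camera origin to each
                  pixel's Gaussian plane.
    normal_map:   (H, W, 3) rendered plane normals in camera coordinates.
    K:            (3, 3) pinhole camera intrinsics.
    """
    H, W = distance_map.shape
    # Homogeneous pixel coordinates (u, v, 1) for every pixel.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    # Back-projected ray directions K^{-1} p (z-component is 1, so the
    # resulting depth is measured along the camera z axis).
    rays = pix @ np.linalg.inv(K).T
    # Plane equation n^T (depth * ray) = distance  =>  depth = distance / (n^T ray).
    denom = np.sum(normal_map * rays, axis=-1)
    denom = np.where(np.abs(denom) < 1e-8, 1e-8, denom)  # guard grazing rays
    return distance_map / denom
```

For a fronto-parallel plane at depth 2 (normal along +z, distance 2), every pixel recovers depth 2 regardless of the intrinsics, which is the sense in which the division removes the view-dependent bias of splatting distances directly.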
Problem

Research questions and friction points this paper is trying to address.

3D Gaussian Rendering
Surface Detail Reconstruction
Mesh Quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

PGSR
Depth Rendering
3D Gaussian Rendering
👥 Authors
Danpeng Chen
Zhejiang University & SenseTime Research and Tetras.AI
Computer Vision · Deep Learning · SLAM
Hai Li
RayNeo
Weicai Ye
Kling Team, Kuaishou Technology
Multimodal Generative Foundation Models · World Model · 3D Vision · Embodied AI · AGI
Yifan Wang
Shanghai AI Laboratory
Weijian Xie
Zhejiang University
Shangjin Zhai
SenseTime Research
Nan Wang
SenseTime Research
Haomin Liu
SenseTime
SLAM · Structure from Motion
Hujun Bao
State Key Lab of CAD&CG, Zhejiang University
Guofeng Zhang
State Key Lab of CAD&CG, Zhejiang University