Sparse Point Cloud Patches Rendering via Splitting 2D Gaussians

πŸ“… 2025-05-14
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing point cloud rendering methods rely on NeRF representations, category-specific priors, or dense input views, making it difficult to achieve both high fidelity and strong generalization from sparse point clouds. This paper proposes the first end-to-end framework for direct point-cloud-to-2D-Gaussian mapping: taking only a raw sparse point cloud as input, it employs a dual-module symmetric network to jointly encode normal, color, and distance features; introduces a splitting decoder to dynamically refine the Gaussian parameters; and adopts an entire-patch rasterization architecture enabling cross-category zero-shot generalization. Crucially, the method requires no NeRF modeling, category priors, or post-processing. Evaluated across multiple benchmarks, it achieves state-of-the-art rendering quality, significantly improving both geometric fidelity and generalization for sparse point clouds.
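The splitting step described above can be pictured as duplicating each initial Gaussian and predicting a small per-copy correction, densifying a sparse set of primitives. A minimal NumPy sketch of that idea (the function name `split_gaussians` is illustrative, and the random perturbation stands in for offsets that the paper's network would predict with a learned decoder):

```python
import numpy as np

def split_gaussians(params, k=2, rng=None):
    """Duplicate each Gaussian parameter vector k times and apply a
    small per-copy offset. In the paper the offsets are predicted by
    the splitting decoder; random noise is used here as a stand-in."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = params.shape                    # n initial Gaussians, d params each
    copies = np.repeat(params, k, axis=0)  # (n * k, d) duplicated Gaussians
    offsets = 0.01 * rng.standard_normal(copies.shape)
    return copies + offsets
```

This is only a sketch of the densification mechanics; the actual decoder also refines the Gaussians toward more accurate positions, scales, and colors.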

πŸ“ Abstract
Current learning-based methods predict NeRF or 3D Gaussians from point clouds to achieve photo-realistic rendering but still depend on categorical priors, dense point clouds, or additional refinements. Hence, we introduce a novel point cloud rendering method by predicting 2D Gaussians from point clouds. Our method incorporates two identical modules with an entire-patch architecture enabling the network to be generalized to multiple datasets. The module normalizes and initializes the Gaussians utilizing the point cloud information including normals, colors and distances. Then, splitting decoders are employed to refine the initial Gaussians by duplicating them and predicting more accurate results, making our methodology effectively accommodate sparse point clouds as well. Once trained, our approach exhibits direct generalization to point clouds across different categories. The predicted Gaussians are employed directly for rendering without additional refinement on the rendered images, retaining the benefits of 2D Gaussians. We conduct extensive experiments on various datasets, and the results demonstrate the superiority and generalization of our method, which achieves SOTA performance. The code is available at https://github.com/murcherful/GauPCRender.
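Since the predicted 2D Gaussians are rendered directly without image-space refinement, the per-pixel contribution of one splat is just a Gaussian falloff around its projected center. A minimal sketch of that weight, assuming a pixel coordinate `uv`, a projected mean, and a 2x2 covariance (the paper's rasterizer additionally handles many splats, depth ordering, and alpha compositing):

```python
import numpy as np

def gaussian2d_weight(uv, mean, cov):
    """Unnormalized weight of a single 2D Gaussian splat at pixel uv:
    exp(-0.5 * (uv - mean)^T cov^{-1} (uv - mean))."""
    d = uv - mean
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))
```

The weight is 1 at the splat center and decays with Mahalanobis distance, which is what lets a small number of well-placed Gaussians cover a surface smoothly.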
Problem

Research questions and friction points this paper is trying to address.

Rendering sparse point clouds without dense inputs
Generalizing across multiple datasets with patch architecture
Refining 2D Gaussians for accurate sparse cloud rendering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predicts 2D Gaussians from sparse point clouds
Uses splitting decoders to refine initial Gaussians
Generalizes across categories without refinement
Changfeng Ma
Nanjing University, Nanjing, China
Ran Bi
Nanjing University, Nanjing, China
Guo Jie
Nanjing University, Nanjing, China
Chongjun Wang
Nanjing University
Yanwen Guo
Nanjing University, Nanjing, China, School of Software, North University of China