🤖 AI Summary
Existing point cloud rendering methods rely on NeRF representations, category-specific priors, or dense input views, making it difficult to achieve both high fidelity and strong generalization from sparse point clouds. This paper proposes the first end-to-end framework that maps a point cloud directly to 2D Gaussians. Taking only a sparse point cloud as input, it employs two identical modules to jointly encode normal, color, and distance features; introduces splitting decoders that duplicate and refine the initial Gaussians; and adopts an entire-patch architecture that enables cross-category zero-shot generalization. Crucially, the method requires no NeRF modeling, category priors, or post-processing of the rendered images. Evaluated across multiple benchmarks, it achieves state-of-the-art rendering quality, significantly improving both geometric fidelity and generalization for sparse point clouds.
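The initialization step the summary describes, one planar (2D) Gaussian per point whose orientation comes from the normal, whose color is copied from the point, and whose scale reflects local point spacing, can be sketched roughly as below. The function name, the k-nearest-neighbour scale heuristic, and the returned parameter layout are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def init_2d_gaussians(points, normals, colors, k=3):
    """Hypothetical per-point initialization of 2D Gaussian parameters.

    Each point spawns one planar Gaussian: orientation from the point
    normal, scale from the distance to nearby points, color copied
    directly. Heuristics here are illustrative, not the paper's scheme.
    """
    n = points.shape[0]
    # Pairwise distances; scale each Gaussian by its mean k-nearest-neighbour
    # distance so sparse regions get larger, hole-filling splats.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d[np.arange(n), np.arange(n)] = np.inf
    knn = np.sort(d, axis=1)[:, :k]
    scales = knn.mean(axis=1, keepdims=True).repeat(2, axis=1)  # (n, 2)

    # Build an orthonormal tangent frame (u, v) per normal: the two axes
    # of the planar Gaussian lie in the plane perpendicular to the normal.
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    helper = np.where(np.abs(normals[:, :1]) < 0.9,
                      np.array([[1.0, 0.0, 0.0]]),
                      np.array([[0.0, 1.0, 0.0]]))
    u = np.cross(normals, helper)
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    v = np.cross(normals, u)

    return {"mean": points, "tangent_u": u, "tangent_v": v,
            "scale": scales, "color": colors,
            "opacity": np.ones((n, 1))}
```

In the actual method these initial parameters are only a starting point; the network then refines them, whereas this sketch stops at the geometric initialization.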
📄 Abstract
Current learning-based methods predict NeRF or 3D Gaussians from point clouds to achieve photo-realistic rendering but still depend on categorical priors, dense point clouds, or additional refinements. Hence, we introduce a novel point cloud rendering method that predicts 2D Gaussians from point clouds. Our method incorporates two identical modules with an entire-patch architecture, enabling the network to generalize to multiple datasets. The module normalizes and initializes the Gaussians using point cloud information including normals, colors, and distances. Then, splitting decoders are employed to refine the initial Gaussians by duplicating them and predicting more accurate results, allowing our method to effectively accommodate sparse point clouds as well. Once trained, our approach generalizes directly to point clouds across different categories. The predicted Gaussians are used directly for rendering without additional refinement of the rendered images, retaining the benefits of 2D Gaussians. We conduct extensive experiments on various datasets, and the results demonstrate the superiority and generalization of our method, which achieves SOTA performance. The code is available at https://github.com/murcherful/GauPCRender.
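The splitting decoders refine the coarse Gaussians by duplicating each one and predicting more accurate parameters for the copies, which is how the method densifies sparse inputs. A minimal non-learned sketch of that duplicate-and-perturb step is below; the function name, the random offsets, and the `1/sqrt(factor)` scale shrink are illustrative stand-ins for what the learned decoder would predict:

```python
import numpy as np

def split_gaussians(means, scales, colors, factor=4, rng=None):
    """Duplicate each coarse Gaussian `factor` times with small random
    offsets and shrunken scales. A learned splitting decoder would predict
    the offsets and refined parameters; the random perturbation here is
    purely illustrative.

    means: (n, 3) centers; scales: (n, 1) isotropic extents; colors: (n, 3).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = means.shape[0]
    # Scatter children around each parent within its spatial extent.
    offsets = rng.normal(size=(n, factor, 3)) * scales[:, None, :]
    child_means = (means[:, None, :] + offsets).reshape(-1, 3)
    # Shrink children so their union covers roughly the parent's footprint.
    child_scales = np.repeat(scales / np.sqrt(factor), factor, axis=0)
    child_colors = np.repeat(colors, factor, axis=0)
    return child_means, child_scales, child_colors
```

The key property this preserves from the description above is the densification itself: `n` input Gaussians become `n * factor` output Gaussians, each inheriting its parent's appearance while covering a smaller region.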