GAP: Gaussianize Any Point Clouds with Text Guidance

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the end-to-end conversion of uncolored 3D point clouds into high-fidelity 3D Gaussian Splatting (3DGS) representations. We propose a text-guided multi-view optimization framework that jointly leverages a depth-aware image diffusion model—providing cross-view-consistent, texture-rich appearance priors—and surface-anchored geometric constraints—ensuring geometric fidelity via point cloud surface projection. Additionally, we introduce a diffusion-based inpainting strategy to robustly reconstruct occluded regions. To our knowledge, this is the first method enabling high-quality, geometry-consistent Gaussianization of arbitrary uncolored point clouds under text guidance. Extensive evaluations on synthetic data, real-world scans, and large-scale scenes demonstrate superior rendering quality and structural integrity over state-of-the-art approaches, particularly for fine-grained modeling of complex geometries.

📝 Abstract
3D Gaussian Splatting (3DGS) has demonstrated its advantages in achieving fast and high-quality rendering. As point clouds serve as a widely-used and easily accessible form of 3D representation, bridging the gap between point clouds and Gaussians becomes increasingly important. Recent studies have explored how to convert colored point clouds into Gaussians, but directly generating Gaussians from colorless 3D point clouds remains an unsolved challenge. In this paper, we propose GAP, a novel approach that gaussianizes raw point clouds into high-fidelity 3D Gaussians with text guidance. Our key idea is to design a multi-view optimization framework that leverages a depth-aware image diffusion model to synthesize consistent appearances across different viewpoints. To ensure geometric accuracy, we introduce a surface-anchoring mechanism that effectively constrains Gaussians to lie on the surfaces of 3D shapes during optimization. Furthermore, GAP incorporates a diffusion-based inpainting strategy that specifically targets completing hard-to-observe regions. We evaluate GAP on the Point-to-Gaussian generation task across varying complexity levels, from synthetic point clouds to challenging real-world scans, and even large-scale scenes. Project Page: https://weiqi-zhang.github.io/GAP.
Problem

Research questions and friction points this paper is trying to address.

Convert colorless point clouds to 3D Gaussians
Ensure geometric accuracy with surface constraints
Complete hard-to-observe regions via inpainting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-view optimization with depth-aware diffusion
Surface-anchoring mechanism for geometric accuracy
Diffusion-based inpainting for hard-to-observe regions
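
The surface-anchoring idea above can be sketched as a simple projection step: after each optimization iteration, pull every Gaussian center back toward its nearest point in the input cloud so it cannot drift off the shape's surface. This is a minimal illustrative sketch, not the paper's implementation; the function name, the KD-tree lookup, and the `max_offset` clamp are all assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def anchor_to_surface(gaussian_centers, surface_points, max_offset=0.01):
    """Hypothetical sketch of a surface-anchoring constraint.

    Each Gaussian center is allowed to sit at most `max_offset` away
    from its nearest neighbor in the input point cloud; anything
    farther is clamped back along the offset direction.
    """
    tree = cKDTree(surface_points)
    _, idx = tree.query(gaussian_centers)          # nearest surface point per center
    nearest = surface_points[idx]
    offset = gaussian_centers - nearest            # displacement off the surface
    norm = np.linalg.norm(offset, axis=1, keepdims=True)
    norm = np.maximum(norm, 1e-12)                 # avoid division by zero
    clipped = np.minimum(norm, max_offset)         # clamp distance to max_offset
    return nearest + offset / norm * clipped
```

In a real pipeline this clamp (or a soft penalty version of it) would run inside the multi-view optimization loop, keeping geometry faithful while the diffusion prior drives appearance.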
👥 Authors
Weiqi Zhang, Tsinghua University (3D Computer Vision, Generative Model)
Junsheng Zhou, Tsinghua University (3D Computer Vision)
Haotian Geng, School of Software, Tsinghua University, Beijing, China
Wenyuan Zhang, School of Software, Tsinghua University, Beijing, China
Yu-Shen Liu, School of Software, Tsinghua University, Beijing, China