UniGS: Unified Geometry-Aware Gaussian Splatting for Multimodal Rendering

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of jointly rendering RGB images, depth maps, surface normals, and semantic logits while preserving cross-scene geometric consistency in high-fidelity multimodal 3D reconstruction, this paper proposes a geometry-aware unified Gaussian rasterization framework. Our method introduces two key innovations: (1) a differentiable ray-ellipsoid intersection renderer that analytically derives gradients for depth and normal predictions, enabling joint optimization of Gaussian rotation and scale to enhance geometric fidelity; and (2) a CUDA-accelerated differentiable rasterization pipeline integrating learnable attributes and a differentiable pruning mechanism to balance efficiency and representational capacity. Evaluated on multiple benchmarks, our approach achieves state-of-the-art performance, significantly improving multimodal reconstruction quality and cross-view geometric consistency.
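The core of the first innovation is computing depth from where a ray actually enters the Gaussian's ellipsoid, rather than from the Gaussian center. The paper derives analytic gradients inside a CUDA rasterizer; as a minimal illustrative sketch (not the authors' implementation), the intersection itself reduces to transforming the ray into the ellipsoid's local frame and solving a quadratic. All function and variable names here are hypothetical:

```python
import numpy as np

def ray_ellipsoid_depth(o, d, mu, R, s):
    """Nearest ray-ellipsoid intersection depth (sketch).

    The ellipsoid is the unit-level set of a 3D Gaussian with center mu,
    rotation R (3x3 orthonormal), and per-axis scales s. Returns None if
    the ray o + t*d misses the ellipsoid.
    """
    # Map the ray into the ellipsoid's local frame, where the ellipsoid
    # becomes the unit sphere |p| = 1.
    o_l = R.T @ (o - mu) / s
    d_l = R.T @ d / s
    # Solve |o_l + t*d_l|^2 = 1, a quadratic a*t^2 + 2*b*t + c = 0.
    a = d_l @ d_l
    b = o_l @ d_l
    c = o_l @ o_l - 1.0
    disc = b * b - a * c
    if disc < 0.0:
        return None  # ray misses the ellipsoid
    return (-b - np.sqrt(disc)) / a  # smaller root = entry depth
```

Because every step is a smooth function of `R` and `s` (away from grazing rays), the returned depth admits the analytic gradients with respect to rotation and scale that the paper exploits for joint optimization.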

📝 Abstract
In this paper, we propose UniGS, a unified map representation and differentiable framework for high-fidelity multimodal 3D reconstruction based on 3D Gaussian Splatting. Our framework integrates a CUDA-accelerated rasterization pipeline capable of simultaneously rendering photo-realistic RGB images, geometrically accurate depth maps, consistent surface normals, and semantic logits. We redesign the rasterization to render depth via differentiable ray-ellipsoid intersection rather than using Gaussian centers, enabling effective optimization of rotation and scale attributes through analytic depth gradients. Furthermore, we derive the analytic gradient formulation for surface normal rendering, ensuring geometric consistency among reconstructed 3D scenes. To improve computational and storage efficiency, we introduce a learnable attribute that enables differentiable pruning of Gaussians with minimal contribution during training. Quantitative and qualitative experiments demonstrate state-of-the-art reconstruction accuracy across all modalities, validating the efficacy of our geometry-aware paradigm. The source code and a multimodal viewer will be made available on GitHub.
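The abstract's analytic normal rendering follows from the same ellipsoid geometry: the outward normal at a surface point is the gradient of the ellipsoid's implicit function, which is differentiable in rotation and scale. A minimal sketch of that relationship (hypothetical names, not the paper's CUDA code):

```python
import numpy as np

def ellipsoid_normal(x, mu, R, s):
    """Outward unit normal of an ellipsoid at a surface point x (sketch).

    The ellipsoid is (x - mu)^T M (x - mu) = 1 with M = R diag(1/s^2) R^T,
    i.e. the unit-level set of a Gaussian with rotation R and scales s.
    """
    M = R @ np.diag(1.0 / s**2) @ R.T
    n = M @ (x - mu)              # gradient of the implicit surface at x
    return n / np.linalg.norm(n)  # normalize to a unit normal
```

Since `M` is an explicit function of `R` and `s`, backpropagating a loss on rendered normals directly updates the Gaussians' rotation and scale, which is what ties normal supervision to geometric consistency.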
Problem

Research questions and friction points this paper is trying to address.

Unified framework for multimodal 3D reconstruction
Redesign rasterization for geometrically accurate depth rendering
Improve computational efficiency through differentiable Gaussian pruning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Gaussian Splatting for multimodal 3D reconstruction
Differentiable ray-ellipsoid intersection for depth rendering
Learnable attribute enables differentiable pruning for efficiency
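The paper does not spell out the gating function for its learnable pruning attribute in this summary; a common realization of such a scheme is a sigmoid gate on a per-Gaussian logit that softly scales opacity during training and thresholds to a hard prune afterward. The sketch below assumes that design; all names are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_opacity(opacity, keep_logit):
    # Soft, differentiable gate in (0, 1): the optimizer can drive
    # keep_logit negative for Gaussians that contribute little.
    return opacity * sigmoid(keep_logit)

def prune_mask(keep_logit, threshold=0.01):
    # Hard prune after training: keep only Gaussians whose gate
    # stayed above the threshold.
    return sigmoid(keep_logit) >= threshold
```

The gate keeps the rendering loss differentiable with respect to the pruning decision, so low-contribution Gaussians are suppressed by gradient descent rather than by a hand-tuned heuristic.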