SemGS: Feed-Forward Semantic 3D Gaussian Splatting from Sparse Views for Generalizable Scene Understanding

📅 2026-03-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing methods for 3D semantic reconstruction often rely on dense multi-view inputs and scene-specific optimization, limiting their efficiency and generalization in real-world scenarios. This work proposes SemGS, a feed-forward framework that directly reconstructs semantic 3D Gaussian representations from sparse input views and enables semantic map synthesis from novel viewpoints. SemGS employs a dual-branch architecture with shared shallow CNNs, a camera-aware attention mechanism, and a dual-Gaussian representation, complemented by a region-smoothness loss to enhance semantic consistency, all without requiring test-time optimization. The method achieves state-of-the-art performance across multiple benchmarks, demonstrating both fast inference speed and strong generalization capabilities in both synthetic and real-world scenes.

๐Ÿ“ Abstract
Semantic understanding of 3D scenes is essential for robots to operate effectively and safely in complex environments. Existing methods for semantic scene reconstruction and semantic-aware novel view synthesis often rely on dense multi-view inputs and require scene-specific optimization, limiting their practicality and scalability in real-world applications. To address these challenges, we propose SemGS, a feed-forward framework for reconstructing generalizable semantic fields from sparse image inputs. SemGS uses a dual-branch architecture to extract color and semantic features, where the two branches share shallow CNN layers, allowing semantic reasoning to leverage textural and structural cues in color appearance. We also incorporate a camera-aware attention mechanism into the feature extractor to explicitly model geometric relationships between camera viewpoints. The extracted features are decoded into dual-Gaussians that maintain geometric consistency while preserving branch-specific attributes, and are then rasterized to synthesize semantic maps under novel viewpoints. Additionally, we introduce a regional smoothness loss to enhance semantic coherence. Experiments show that SemGS achieves state-of-the-art performance on benchmark datasets, while providing rapid inference and strong generalization capabilities across diverse synthetic and real-world scenarios.
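The abstract does not give the exact form of the regional smoothness loss, so the sketch below is only one plausible reading of the idea: within each region (e.g. a superpixel or segment), penalize how far per-pixel semantic features stray from the region's mean, which encourages locally coherent semantics. The function name and the numpy formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def region_smoothness_loss(sem, regions):
    """Hypothetical regional smoothness penalty (not the paper's exact loss).

    sem     : (H, W, C) per-pixel semantic features
    regions : (H, W) integer region labels (e.g. superpixel ids)

    For every region, measure the mean squared deviation of its pixels'
    features from the region mean, then average over regions.
    """
    loss, n_regions = 0.0, 0
    for r in np.unique(regions):
        feats = sem[regions == r]          # (N_r, C) features in region r
        mean = feats.mean(axis=0)          # region-mean semantic feature
        loss += ((feats - mean) ** 2).mean()
        n_regions += 1
    return loss / max(n_regions, 1)
```

With perfectly uniform semantics inside each region the penalty is zero; any within-region disagreement increases it, which is the coherence behavior the abstract describes.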
Problem

Research questions and friction points this paper is trying to address.

semantic scene reconstruction
novel view synthesis
sparse views
generalizable scene understanding
3D Gaussian splatting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic 3D Gaussian Splatting
Sparse-view Reconstruction
Feed-forward Framework
Camera-aware Attention
Dual-branch Architecture
Sheng Ye
Computer Science, Tsinghua University
3D Vision, 3D Reconstruction, Human Animation
Zhen-Hui Dong
Department of Computer Science, Tsinghua University, Beijing, China
Ruoyu Fan
Department of Computer Science, Tsinghua University, Beijing, China
Tian Lv
Department of Computer Science, Tsinghua University, Beijing, China
Yong-Jin Liu
Professor of College of Mathematics and Computer Science at Fuzhou University
Mathematical Programming, Statistical Optimization, Numerical Computation