OpenInsGaussian: Open-vocabulary Instance Gaussian Segmentation with Context-aware Cross-view Fusion

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing semantic Gaussian splatting methods face two key bottlenecks in 2D→3D semantic projection: (1) single-view masks lack the contextual cues needed for robust prompting, and (2) multi-view feature fusion suffers from semantic inconsistency and loss of geometric detail. To address these, the authors propose OpenInsGaussian, an open-vocabulary instance Gaussian segmentation framework with two components: (1) a context-aware mask enhancement module that exploits semantic correlations to refine single-view prompts, and (2) an attention-driven cross-view feature aggregation mechanism that enforces geometric consistency and semantic alignment during 3D feature reconstruction. OpenInsGaussian tightly integrates large-scale 2D vision-language models, context-enhanced masks, differentiable Gaussian splatting rendering, and attention-based fusion, mitigating semantic misalignment and information degradation. Evaluated on ScanNetV2 and S3DIS, OpenInsGaussian achieves state-of-the-art performance, demonstrating strong generalization in open-vocabulary settings and robust segmentation accuracy in complex, real-world scenes.

📝 Abstract
Understanding 3D scenes is pivotal for autonomous driving, robotics, and augmented reality. Recent semantic Gaussian Splatting approaches leverage large-scale 2D vision models to project 2D semantic features onto 3D scenes. However, they suffer from two major limitations: (1) insufficient contextual cues for individual masks during preprocessing and (2) inconsistencies and missing details when fusing multi-view features from these 2D models. In this paper, we introduce **OpenInsGaussian**, an **Open**-vocabulary **Ins**tance **Gaussian** segmentation framework with Context-aware Cross-view Fusion. Our method consists of two modules: Context-Aware Feature Extraction, which augments each mask with rich semantic context, and Attention-Driven Feature Aggregation, which selectively fuses multi-view features to mitigate alignment errors and incompleteness. Through extensive experiments on benchmark datasets, OpenInsGaussian achieves state-of-the-art results in open-vocabulary 3D Gaussian segmentation, outperforming existing baselines by a large margin. These findings underscore the robustness and generality of our proposed approach, marking a significant step forward in 3D scene understanding and its practical deployment across diverse real-world scenarios.
Problem

Research questions and friction points this paper is trying to address.

Improves contextual cues for individual masks in preprocessing
Resolves inconsistencies in multi-view feature fusion from 2D models
Addresses missing details during cross-view semantic feature aggregation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Context-Aware Feature Extraction for mask augmentation
Attention-Driven Feature Aggregation for multi-view fusion
Open-vocabulary 3D Gaussian segmentation framework
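The attention-driven cross-view aggregation listed above can be illustrated with a similarity-weighted fusion of per-view features. This is a minimal sketch only, not the paper's implementation: the consensus-query weighting, the temperature `tau`, and the function name `aggregate_multiview_features` are all assumptions for illustration.

```python
import numpy as np

def aggregate_multiview_features(view_feats, tau=0.1):
    """Fuse per-view semantic features for one Gaussian via attention.

    view_feats: (V, D) array of L2-normalized 2D features observed from
    V views. Views agreeing with the consensus get higher attention
    weight, so inconsistent views are suppressed rather than averaged in.
    NOTE: hypothetical sketch; the paper's actual mechanism may differ.
    """
    # Consensus query: normalized mean of the view features
    query = view_feats.mean(axis=0)
    query /= np.linalg.norm(query) + 1e-8

    # Cosine similarity of each view to the consensus
    scores = view_feats @ query  # shape (V,)

    # Temperature-scaled softmax -> attention weights per view
    w = np.exp((scores - scores.max()) / tau)
    w /= w.sum()

    # Attention-weighted fusion, renormalized to unit length
    fused = w @ view_feats  # shape (D,)
    return fused / (np.linalg.norm(fused) + 1e-8)
```

With two consistent views and one outlier, the outlier's weight collapses under the softmax, so the fused feature stays close to the consistent views.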