🤖 AI Summary
Existing real-time open-vocabulary 3D scene understanding methods suffer from low instance segmentation accuracy, static semantic updates, and weak responses to complex language queries. To address these challenges, this paper proposes the first real-time open-vocabulary 3D understanding system designed for dynamic environments. The method introduces three key innovations: (1) an instance-area-adaptive semantic caching mechanism that enables global label evolution; (2) a dual-path cross-modal encoding framework that jointly models object attributes and environmental context; and (3) a robust perception pipeline that integrates TSDF voxel reconstruction with foundation-model confidence-map fusion. Extensive experiments on ICL, Replica, ScanNet, and ScanNet++ demonstrate significant gains in semantic segmentation accuracy and significant reductions in open-ended query latency, consistently outperforming state-of-the-art baselines across all major metrics.
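To make innovation (1) concrete, here is a minimal sketch of what an instance-area-adaptive semantic cache could look like. The class name, the area-weighted voting rule, and all identifiers below are our assumptions for illustration; the paper's exact update scheme is not given here.

```python
from dataclasses import dataclass, field


@dataclass
class InstanceEntry:
    # Running per-label evidence for one 3D instance.
    label_scores: dict = field(default_factory=dict)
    area: float = 0.0  # cumulative observed 2D mask area (pixels)


class AdaptiveSemanticCache:
    """Illustrative instance-area-adaptive label cache (a sketch:
    the area-weighted voting rule is our assumption, not the
    paper's published mechanism)."""

    def __init__(self):
        self.entries = {}

    def update(self, inst_id, label, score, mask_area):
        # Larger masks are assumed to be more reliable detections,
        # so each observation votes with weight score * area.
        e = self.entries.setdefault(inst_id, InstanceEntry())
        e.label_scores[label] = e.label_scores.get(label, 0.0) + score * mask_area
        e.area += mask_area

    def label(self, inst_id):
        # Current global label = highest accumulated evidence,
        # so labels can evolve as new views arrive.
        e = self.entries[inst_id]
        return max(e.label_scores, key=e.label_scores.get)
```

Under this scheme a large, confident view can overturn an early small-mask label, which is one way a cache can support the "global label evolution" the summary describes.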
📝 Abstract
Real-time open-vocabulary scene understanding is essential for efficient 3D perception in applications such as vision-language navigation, embodied intelligence, and augmented reality. However, existing methods suffer from imprecise instance segmentation, static semantic updates, and limited handling of complex queries. To address these issues, we present OpenFusion++, a TSDF-based real-time system for joint 3D semantic and geometric reconstruction. Our approach refines 3D point clouds by fusing confidence maps from foundation models, dynamically updates global semantic labels via an instance-area-adaptive cache, and employs a dual-path encoding framework that integrates object attributes with environmental context for precise query responses. Experiments on the ICL, Replica, ScanNet, and ScanNet++ datasets demonstrate that OpenFusion++ significantly outperforms baseline methods in both semantic accuracy and query responsiveness.
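For intuition on the TSDF side, the sketch below shows a standard per-voxel truncated signed-distance update in which the fusion weight is taken from a foundation-model confidence score. This is a generic TSDF formulation under our own assumptions (confidence in [0, 1] used directly as the weight); it is not the paper's exact formula.

```python
def fuse_voxel(tsdf, weight, sdf_obs, conf, trunc=0.04, w_max=100.0):
    """Confidence-weighted TSDF update for one voxel (sketch).

    `conf` is a score projected from a foundation-model confidence
    map, assumed to lie in [0, 1]; using it directly as the fusion
    weight is our assumption, not the paper's published rule.
    """
    # Truncate the signed-distance observation and normalise to [-1, 1].
    sdf = max(-trunc, min(trunc, sdf_obs)) / trunc
    w_new = conf
    if w_new <= 0.0:
        return tsdf, weight  # skip observations the model distrusts
    # Running weighted average, as in classic volumetric fusion.
    fused = (tsdf * weight + sdf * w_new) / (weight + w_new)
    return fused, min(weight + w_new, w_max)
```

Down-weighting low-confidence pixels this way is one plausible route to the "refined 3D point clouds" the abstract mentions, since unreliable depth/semantics contribute less to the reconstructed surface.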